
How data scientists are using AI for suicide prevention


The Crisis Text Line uses machine learning to figure out who’s at risk and when to intervene.
When horrible news — like the deaths by suicide of chef, author, and TV star Anthony Bourdain and fashion designer Kate Spade, or the 2015 Paris attacks — breaks, crisis counseling services often get deluged with calls from people in despair. Deciding whom to help first can be a life-or-death decision.
At the Crisis Text Line, a text messaging-based crisis counseling hotline, these deluges have the potential to overwhelm the human staff.
So data scientists at Crisis Text Line are using machine learning, a type of artificial intelligence, to pull out the words and emojis that can signal that a person is at higher risk of suicidal ideation or self-harm. The computer tells them which texters on hold need to jump to the front of the line to be helped.
They can do this because Crisis Text Line does something radical for a crisis counseling service: It collects a massive amount of data on the 30 million texts it has exchanged with users. While Netflix and Amazon are collecting data on tastes and shopping habits, the Crisis Text Line is collecting data on despair.
The data, some of which is available here, has turned up all kinds of interesting insights on mental health. For instance, Wednesday is the most anxiety-provoking day of the week. Crises involving self-harm often happen in the darkest hours of the night.
CTL started in 2013 to serve people who may be uncomfortable talking about their problems aloud or who are just more likely to text than call. Anyone in the United States can text the number “741741” and be connected with a crisis counselor. “We have around 10 active rescues per day, where we actually send out emergency services to intervene in an active suicide attempt,” Bob Filbin, the chief data scientist at CTL, told me last year.
Overall, the state of suicide prevention in this country is extremely frustrating. The Centers for Disease Control and Prevention recently reported that between 1999 and 2016, almost every state saw an increase in suicide. Twenty-five states saw increases of 30 percent or more. Yet even the best mental health clinicians have trouble understanding who is most at risk for self-harm.
“We do not yet possess a single test, or panel of tests that accurately identifies the emergence of a suicide crisis,” a 2012 article in Psychotherapy explains. And that’s still true.
Doctors understand the risks for suicidal ideation better than they understand the risks for physical self-harm. Complicating matters, the CDC finds that 54 percent of suicides involve people with no known mental health issues.
But we can do better. And that’s where data scientists like Filbin think they can help fill in the gaps, by searching through reams of data to determine who is at greatest risk and when to intervene. Even small insights help. For example, the Crisis Text Line finds that when a person mentions a household drug, like “Advil,” it’s more predictive of risk than a word like “cut.”
Sending help to people in crisis is just the start. The hotline hopes its data could one day actually help predict and prevent instances of self-harm from happening in the first place. In 2017, I talked to Filbin about what data science and artificial intelligence can learn about how to help people. This conversation has been edited for length and clarity.
Tell me about the service Crisis Text Line provides.
The idea is that a person in crisis can reach out to us — no matter the issue, no matter where they are — 24/7 via text and get connected to a volunteer crisis counselor who has been trained. The great thing about that is people have their phones everywhere: You can be in school, you can be at work, and whenever a crisis occurs, we want to be immediately accessible at the time of crisis.
[Suicide attempt] is the greatest type of risk that we’re trying to prevent.
You’re also collecting data on these interactions. Why is that necessary to run the service?
We have 33 million messages exchanged with texters in crisis. We have had over 5,300 active rescues [where they’ve dispatched emergency services to someone attempting suicide] and the entire message and conversations associated with those.
Our data gives the rich context around why a particular crisis event happened. We get both the cause and the effect, and we get how they actually talk about these issues. I think it’s going to provide a lot more context on how we can actually spot these events and prevent them.
How can we spot these events before they occur? Seeing the actual language that people use is going to be critical to [answer] that.
From the very beginning, we believed in the idea that our data could help to improve the crisis space as a whole. By collecting this data and then sharing it with the public, with policymakers, with academic researchers… it could provide value to people in crisis whether or not they actually used our service.
So, someone in crisis texts your service. I’m curious about the specific data you’re collecting from that interaction.
There are three types of data we’re collecting:
First: the conversation — the exchange between a texter and a crisis counselor, and then a lot of metadata around those conversations (timestamps; the people who were involved: the crisis counselor, the texter). If the texter comes back, we know, okay, this is a texter who has used our service before, or this crisis counselor has had other [interactions with the texter].
Second: After the conversation, we ask our crisis counselors [questions like], “What issues came up in the conversation?” and “What was the level of risk?”
The third piece: a set of survey questions to the texter asking for feedback: Was this conversation helpful? What type of help did you experience? What did you find valuable? What would you hope would be different for other people in crisis? Or the same.
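
To make those three streams concrete, here is a minimal sketch of how the records might be represented, written as Python dataclasses. The field names and types are hypothetical illustrations, not CTL’s actual schema.

```python
# A sketch of the three kinds of data described above.
# All names and fields are hypothetical, not CTL's real schema.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Conversation:
    """The message exchange itself, plus metadata around it."""
    texter_id: str
    counselor_id: str
    started_at: datetime
    messages: list[str] = field(default_factory=list)
    is_repeat_texter: bool = False   # has this texter used the service before?

@dataclass
class CounselorReport:
    """Questions the crisis counselor answers after the conversation."""
    conversation_id: str
    issues: list[str]   # e.g. ["self-harm", "anxiety"]
    risk_level: str     # the counselor's assessment of severity

@dataclass
class TexterSurvey:
    """Optional feedback survey sent to the texter."""
    conversation_id: str
    was_helpful: bool
    feedback: str = ""
```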
So how has this data helped you do your job better?
We’ve seen problems arise, and then we think about how to use data to solve these problems.
So one problem is spike events.
A spike is when we see a huge influx in texter demand — this happens to crisis centers all the time. When Robin Williams died by suicide, around the election, around the Paris terrorist attacks in 2015, or when somebody who used the service and found it beneficial shares that and it goes viral — we’ll have a big spike event. Our daily volume will more than double.
So how do you respond to that?
Traditionally, crisis centers respond to people in the order in which they come into the queue. But if you double your volume instantly, that’s going to lead to long wait times.
We want to help the people who have the highest-severity cases first: We want to help somebody who’s feeling imminently suicidal before somebody who’s having trouble with their girlfriend or boyfriend or something.
We trained an algorithm. We asked, “What do texters say at the very beginning of a conversation that is indicative of an active rescue?”
So we can triage and prioritize texters who say the most severe things.
When you say you’re “asking” these questions, you mean you’re using a computer? Machine learning or some type of AI?
We’re asking a computer to figure it out.
[We ask it], basically: look at active rescue conversations and see whether there is anything different about the initial messages that texters send in those.
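
To make the triage idea concrete, here is a minimal sketch of that kind of model, assuming a scikit-learn-style text classifier trained on the opening messages of past conversations. The library choice, the toy data, and the variable names are all illustrative; this is not a description of CTL’s actual system.

```python
# Sketch: learn which opening words predict an active rescue, then use the
# model's risk score to reorder the waiting queue. Toy data for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# First messages of past conversations, labeled 1 if the conversation
# ended in an active rescue and 0 otherwise.
initial_messages = [
    "i took a bottle of advil and i cant stop crying",
    "my boyfriend and i had a fight again",
    "i want to die tonight i have the pills ready",
    "school is so stressful right now",
]
active_rescue = [1, 0, 1, 0]

# Bag-of-words features plus logistic regression: a simple way to surface
# which opening words and phrases are associated with an eventual rescue.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(initial_messages, active_rescue)

# Triage: score everyone currently waiting and answer the highest risk first.
waiting_queue = [
    "i just swallowed a bunch of ibuprofen",
    "my girlfriend broke up with me",
]
risk_scores = model.predict_proba(waiting_queue)[:, 1]
for score, text in sorted(zip(risk_scores, waiting_queue), reverse=True):
    print(f"{score:.2f}  {text}")
```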
So what type of words did the computer pick out that indicated imminent risk?
Before we used the computer, we had a list of 50 words that [we thought] were probably indicative of high risk. Words like “die,” “cut,” “suicide,” “kill,” etc.
When a data scientist ran the analysis, he found thousands of words and phrases indicative of an active rescue that are actually more predictive than our hand-picked list.
Words like “Ibuprofen” and “Advil” and other associated words [i.e., common household drugs] were 14 times as predictive as the word “suicide” that a texter would need an active rescue.
Wow.
Even the crying face emoticon — that’s 11 times as predictive as the word “suicide” that somebody’s going to need an active rescue.
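
One simple way to arrive at an “N times as predictive” figure is to compare the active-rescue rate among conversations whose first message contains a given token with the rate for conversations containing the word “suicide.” That is only a guess at the method, since CTL hasn’t published its exact analysis; the sketch below uses made-up data.

```python
# Sketch: how many times more predictive of an active rescue is a token,
# relative to the word "suicide"? Data below is invented for illustration.

def rescue_rate(conversations, token):
    """Fraction of conversations mentioning `token` that ended in an active rescue."""
    hits = [rescued for text, rescued in conversations if token in text.split()]
    return sum(hits) / len(hits) if hits else 0.0

# (first message, ended in an active rescue?) pairs -- toy data only.
conversations = [
    ("i took advil all of it", True),
    ("i bought advil today", True),
    ("thinking about suicide a lot", False),
    ("suicide crossed my mind", True),
]

baseline = rescue_rate(conversations, "suicide")
for token in ["advil", "ibuprofen", "suicide"]:
    ratio = rescue_rate(conversations, token) / baseline if baseline else float("nan")
    print(f"{token!r}: {ratio:.1f}x as predictive as 'suicide'")
```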
