
Facebook’s artificial intelligence can spot suicidal users

The social network rolls out more plans for its live-streaming feature that could save lives

Facebook plans to use artificial intelligence to flag users of the site who may be at risk of taking their own lives. An algorithm has been developed that would work across the live-streaming feature and Messenger.

Describing the new feature in a blog post, Facebook explains that the algorithm will identify suicidal tendencies in users’ posts and in the comments their friends leave. Once Facebook’s human review team has assessed a post the AI flags as urgent, they contact the user with advice on how to seek help.

Before this, users could report suicide risk to Facebook through a post’s report button.

The AI responds to specific phrases and words, having been trained on examples of posts previously flagged for the risk that someone would attempt to take their own life. It’s similar to the algorithm Facebook already uses to identify terrorist content.
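To illustrate the general idea, here is a minimal sketch of that kind of classifier: a model trained on posts previously flagged as at-risk that scores new posts by the words they contain, with anything above a threshold queued for human review. This is a hypothetical toy example, not Facebook’s actual system; all data, function names, and thresholds are invented for illustration.

```python
from collections import Counter

def train_word_scores(flagged_posts, safe_posts):
    """Score each word by how much more often it appears in flagged posts.

    Uses a simple add-one-smoothed frequency ratio: words common in
    flagged posts score above 1.0, words common in safe posts below it.
    (Illustrative only; a real system would be far more sophisticated.)
    """
    flagged = Counter(w for p in flagged_posts for w in p.lower().split())
    safe = Counter(w for p in safe_posts for w in p.lower().split())
    vocab = set(flagged) | set(safe)
    return {w: (flagged[w] + 1) / (safe[w] + 1) for w in vocab}

def risk_score(post, scores):
    """Average per-word score; higher means more similar to flagged posts."""
    words = post.lower().split()
    if not words:
        return 0.0
    # Unseen words get a neutral score of 1.0
    return sum(scores.get(w, 1.0) for w in words) / len(words)

# Tiny invented training set, standing in for previously flagged posts
flagged_examples = ["i cant go on anymore", "nobody would miss me"]
safe_examples = ["great game last night", "loving this weather"]
scores = train_word_scores(flagged_examples, safe_examples)

# Posts scoring above some threshold would be sent to human reviewers,
# mirroring the human-in-the-loop step the article describes.
print(risk_score("i really cant go on", scores))   # scores high
print(risk_score("what a great weather", scores))  # scores low
```

The key design point the article hints at is that the model doesn’t act on its own: a high score only escalates the post to a human team, which decides whether to reach out.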

Dr John Draper from the US National Suicide Prevention Lifeline told the BBC: “The more we can mobilise the support network of an individual in distress to help them, the more likely they are to get help. The question is how we can do that in a way that doesn't feel invasive. I would say though that what they are now offering is a huge step forward.”

In the US, Facebook users can use the Messenger tool to contact crisis counselors. Back in January, a 14-year-old died by suicide and broadcast it on Facebook Live.

Last year, Dazed reported on live-streaming platform Periscope, and how it disturbingly became a network broadcasting suicide, rape, and physical assault in the space of a few months. A 19-year-old woman in France jumped in front of a train after alleging that a partner had raped and abused her. Her suicide was watched by over 1,000 people.

Facebook explained that it hopes both to prevent tragedies and to address crises as they unfold: once a stream on Facebook Live has been flagged for urgent review by the human team, a message can appear over the top of the stream suggesting ways other viewers can help.

“Some might say we should cut off the stream of the video the moment there is a hint of somebody talking about suicide,” said Jennifer Guadagno, Facebook's lead researcher on the AI project. “But what the experts emphasised was that cutting off the stream too early would remove the opportunity for people to reach out and offer support. So, this opens up the ability for friends and family to reach out to a person in distress at the time they may really need it the most.”

For now, the feature is being tested only in the United States, with plans to roll it out worldwide.