Facebook will be able to detect when someone may be suicidal from their posts
In the UK, female suicide rates are at their highest in a decade, while men die by suicide at three times that rate. It’s a prevalent issue: people feel alone and hopeless even in an increasingly connected – or socially detached, however you look at it – world. It’s something social network giants have been making an effort to tackle.
In an expansion of its earlier limited run of suicide and self-harm prevention tools, Facebook will begin mass-implementing pattern recognition for posts and live videos to detect when someone could be suicidal. Currently the tools are only available in the U.S., but they will soon roll out globally. First responders will also be notified when the need for them arises. In the past month alone, Facebook has alerted over 100 first responders to potentially fatal posts.
Facebook’s VP of product management Guy Rosen says that, thanks to algorithms picking up on comments like “are you okay” and “can I help”, Facebook has zeroed in on posts and videos it previously might have missed.
In a Facebook post announcing the rollout, CEO Mark Zuckerberg wrote: “with all the fear about how AI may be harmful in the future, it's good to remind ourselves how AI is actually helping save people's lives today”, before explaining in more detail how the technology works: “There's a lot more we can do to improve this further. Today, these AI tools mostly use pattern recognition to identify signals – like comments asking if someone is okay – and then quickly report them to our teams working 24/7 around the world to get people help within minutes.”
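Facebook hasn’t published details of its model, but the “pattern recognition” Zuckerberg describes can be illustrated in spirit with a hypothetical sketch: scan a post’s comments for concern phrases like “are you okay”, and flag the post for human review if any appear. The phrase list and function name here are invented for illustration, not Facebook’s actual system.

```python
# Hypothetical sketch only: Facebook's real system is proprietary and far
# more sophisticated. This shows the simplest form of the idea, flagging
# a post when its comments contain phrases that suggest concern.

CONCERN_PHRASES = ("are you ok", "are you okay", "can i help")

def flag_for_review(comments):
    """Return True if any comment contains a concern phrase."""
    for comment in comments:
        text = comment.lower()
        if any(phrase in text for phrase in CONCERN_PHRASES):
            return True
    return False

# A post with worried replies gets flagged; an ordinary one does not.
print(flag_for_review(["Are you okay?? DM me", "thinking of you"]))  # True
print(flag_for_review(["great photo!", "love this"]))                # False
```

In a real pipeline, a flag like this would route the post to trained human reviewers rather than trigger any automated action, which matches the human-in-the-loop process Zuckerberg describes.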
Zuckerberg added, “we’re going to keep working closely with our partners at Save.org, National Suicide Prevention Lifeline ‘1-800-273-TALK (8255)’, Forefront Suicide Prevention, and with first responders to keep improving. If we can use AI to help people be there for their family and friends, that's an important and positive step forward”.
Last year, Instagram launched a suicide prevention tool that allowed users to anonymously flag posts about self-harm and suicide. The tool notifies the person who posted that someone thinks they’re going through a difficult time and offers them help. Flagged posts on Instagram are then reviewed by a team of people.
Facebook’s chief security officer Alex Stamos addressed concerns about the social network taking its level of surveillance too far. In a Twitter post, he acknowledged the “creepy/scary/malicious” risk AI can pose, adding that “it’s important to set good norms today around weighing data use versus utility and be thoughtful about bias creeping in”.
While this technology is being used for good, the idea of how far Facebook (or someone else) could take it and invade our privacy is a very real fear. It also isn’t clear how effective the technology is at detecting genuine suicidal thoughts, or whether there will be many false positives when people use similar language to mean something else entirely – however, if it manages to save anyone, it’s worth a few hiccups.
“The creepy/scary/malicious use of AI will be a risk forever, which is why it's important to set good norms today around weighing data use versus utility and be thoughtful about bias creeping in. Also, Guy Rosen and team are amazing, great opportunity for ML engs to have impact.” – Alex Stamos (@alexstamos), November 27, 2017