AI on Facebook is helping to prevent suicide
With over 2.5 billion users, Facebook is making considerable strides to keep the people on its platform safe, both mentally and physically. But how can any platform keep 2.5 billion people safe? Here, "safe" means policies that can govern a wide array of behaviors and activities.
Facebook takes a five-pronged approach to safety: policies, tools, help, partnerships, and feedback. With a focus on being proactive rather than reactive, its suicide prevention tool has grown and adapted. Suicide is not something a social network would ordinarily have to worry about, yet Facebook does: not only because suicide is contemplated on the platform itself, but because the company needs to guarantee the safety and security of all its users, online and offline.
Facebook approaches suicide prevention by using AI to deliver in-depth, well-placed support material. Following a string of suicides that were live-streamed on the platform, it built an algorithm to detect signs of potential self-harm, seeking to address a serious problem proactively.
The algorithm touches nearly every post on Facebook, rating each piece of content on a scale from zero to one, with one expressing the highest likelihood of “imminent harm,” according to Mercy Ndegwa, head of Public Policy at Facebook East Africa.
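Facebook has not published the model itself, but the scoring step it describes can be sketched in a few lines. In the hypothetical Python below, risk_score, KEYWORD_WEIGHTS, BIAS, and FLAG_THRESHOLD are all stand-ins for what a trained classifier would learn from data; only the zero-to-one scale and the idea of a flagging cutoff come from the article.

```python
# Illustrative sketch only: Facebook's actual model is not public.
# risk_score() stands in for a trained classifier that maps a post's
# text to a probability between 0 and 1.
import math

# Hypothetical phrase weights a real model would learn from data.
KEYWORD_WEIGHTS = {
    "goodbye": 0.8,
    "can't go on": 1.5,
    "end it all": 2.0,
    "worthless": 0.9,
}
BIAS = -2.0  # keeps scores low for neutral text

def risk_score(post_text: str) -> float:
    """Return a 0-1 score; 1.0 means highest likelihood of imminent harm."""
    text = post_text.lower()
    logit = BIAS + sum(w for phrase, w in KEYWORD_WEIGHTS.items() if phrase in text)
    return 1.0 / (1.0 + math.exp(-logit))  # squash into (0, 1)

FLAG_THRESHOLD = 0.5  # assumed cutoff; the real threshold is not public

def should_flag(post_text: str) -> bool:
    return risk_score(post_text) >= FLAG_THRESHOLD

print(should_flag("Beautiful sunset today"))             # False
print(should_flag("I'm saying goodbye, I can't go on"))  # True
```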
Once a post is flagged for potential suicide risk, it's sent to Facebook's team of content moderators, who are said to be well trained in picking up the hints and signals of suicide risk.
Once a post has been reviewed, two outreach actions can take place: reviewers can either send the user suicide-prevention resources or contact emergency responders. This is where the AI earns its keep. Because it flags worrisome posts to human moderators instead of waiting for user reports, Facebook can decrease how long it takes to send help.
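That review-and-outreach flow reduces to a small piece of routing logic. The sketch below is an assumption about how such routing might look; the type, function, and action names are invented for illustration and are not Facebook's actual system.

```python
# Hypothetical routing of a flagged post after human review.
from dataclasses import dataclass

@dataclass
class FlaggedPost:
    user_id: int
    text: str
    score: float  # the model's 0-1 risk score

def outreach_action(post: FlaggedPost, reviewer_sees_imminent_risk: bool) -> str:
    """Pick one of the two outreach actions the article describes."""
    if reviewer_sees_imminent_risk:
        # A human reviewer judged the danger immediate.
        return "contact_emergency_responders"
    # Otherwise, point the user toward support material.
    return "send_suicide_resources"

post = FlaggedPost(user_id=42, text="...", score=0.74)
print(outreach_action(post, reviewer_sees_imminent_risk=False))
# -> send_suicide_resources
```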
Facebook also uses AI to prioritize particularly risky or urgent user reports so that moderators address them more quickly, and to instantly surface local-language resources and first-responder contact information. It is also dedicating more moderators to suicide prevention and training them to handle these cases 24/7.
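Prioritizing the riskiest reports is, at its core, a priority-queue problem. A minimal sketch, assuming the model's score is the only ranking signal (the real system surely weighs more), might look like this; the helpline entries are placeholders.

```python
import heapq

# Min-heap keyed on the negated score, so the highest-risk report
# is always popped first.
review_queue = []

def enqueue_report(post_id, score):
    heapq.heappush(review_queue, (-score, post_id))

def next_report():
    """Hand a moderator the most urgent report in the queue."""
    neg_score, post_id = heapq.heappop(review_queue)
    return post_id

# Placeholder local-language resources a moderator might surface.
LOCAL_RESOURCES = {
    "en_US": "988 Suicide & Crisis Lifeline",
    "sw_KE": "Befrienders Kenya",
}

enqueue_report(101, score=0.42)
enqueue_report(102, score=0.97)
print(next_report())  # 102: the riskiest report is reviewed first
```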
The AI is also tuned to be less generic and more personal, which is especially vital when dealing with somebody on edge. It finds patterns in the words and imagery used in posts that were manually reported for suicide risk in the past, and it also looks for comments such as "Are you OK?" and "Do you need help?"
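One plausible way to fold in those comment signals, purely as an assumption about how they could be combined with the post's own score:

```python
# Assumed phrases and boost value; the article only says such
# comments are looked for, not how they are weighted.
CONCERNED_PHRASES = ("are you ok", "do you need help")

def adjusted_score(post_score, comments):
    """Nudge a post's risk score upward for each concerned comment."""
    boost = sum(
        0.1 for c in comments
        if any(p in c.lower() for p in CONCERNED_PHRASES)
    )
    return min(1.0, post_score + boost)

print(adjusted_score(0.35, ["Are you OK?", "Do you need help?"]))  # 0.55
```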
Through the combination of AI, human moderators, and crowdsourced reports, Facebook is playing one of the most instrumental roles in preventing such tragedies. Live broadcasts in particular have the power to wrongly glorify suicide, hence the new precautions; they also reach a large audience, since everyone sees the content simultaneously, unlike recorded Facebook videos, which can be flagged and taken down before many people view them.
With billions of users, it's good to see Facebook stepping up here. The company has built a way for users not only to get in touch with one another but to care for one another. Creating a ubiquitous global communication utility comes with responsibilities beyond those of most tech companies, something Facebook seems to be coming to terms with.