ARTIFICIAL INTELLIGENCE (A.I.) HAS MADE ITS WAY INTO FACEBOOK
As vague and artificial as our online personas can be, Facebook remains a popular platform for venting and grumbling about our day-to-day highs and lows. Over time, Facebook has evolved by integrating machine learning and artificial intelligence to better understand its users. Among these many uses, Facebook has now started applying artificial intelligence to detect suicidal users.
Facebook launched live-video streaming last year as a new feature for users to share their experiences in real time. Since then, users have captured some of the more painful moments of reality, including depression, signs of anxiety, fights, murders, racism, and even suicides.
The social network has now announced that it is adding suicide-prevention tools to Facebook Live, allowing viewers of a live video to report it directly to Facebook and reach out to the person streaming.
This new initiative could help prevent tragedies such as that of a Brazilian policeman who killed himself live on Facebook, or the 12-year-old girl who took her own life live on another social network.
Though suicide may seem like a rare problem, around 800,000 people take their own lives each year, according to the World Health Organization (WHO). The number of people who attempt suicide but do not die is far larger. The taboo surrounding the subject makes the problem even more serious.
SUICIDE – STILL A TABOO TOPIC
When Goethe published “The Sorrows of Young Werther” in 1774, it triggered a wave of suicides, a phenomenon that became known as the “Werther effect”. Since then, people and the media have been very cautious about discussing the subject, which often leads to avoiding it altogether.
This effect, however, seems to depend more on how the subject is approached than on the subject itself. Some reports imply that talking about suicide responsibly can actually reduce the number of suicides. This positive effect was named the Papageno effect, after a character in Mozart’s opera “The Magic Flute”.
The Papageno effect is much less studied than the Werther effect, so it has had far less influence on public opinion, which for the most part still prefers not to discuss the subject openly.
HOW THE SUICIDE PREVENTION A.I. TOOL WORKS
The idea behind this new use of AI is that the system scans content published by users and recognizes patterns of sadness or depression in posts and comments. When such a pattern is found, the case is escalated to a human review team for analysis.
If the team confirms the system’s assessment, the person is contacted and offered help. To that end, Facebook has also established partnerships with US mental health organizations so that people who need support can find it via the social network.
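The flag-and-escalate flow described above can be sketched roughly as follows. Facebook has not published its actual model, so the phrase list, scoring function, and threshold here are purely illustrative assumptions:

```python
# Toy sketch of a flag-and-escalate pipeline: score each post for
# distress signals, and queue anything above a threshold for HUMAN
# review -- nothing is acted on automatically.
# The phrase list and scoring are illustrative assumptions, not
# Facebook's real (unpublished) classifier.

from dataclasses import dataclass, field
from typing import List

# Hypothetical phrases a real classifier might learn to weight heavily.
DISTRESS_PHRASES = ["want to disappear", "can't go on", "no reason to live"]

@dataclass
class ReviewQueue:
    """Stands in for the human review team."""
    cases: List[str] = field(default_factory=list)

    def escalate(self, post: str) -> None:
        self.cases.append(post)

def distress_score(post: str) -> float:
    """Fraction of known distress phrases present in the post (toy metric)."""
    text = post.lower()
    hits = sum(1 for phrase in DISTRESS_PHRASES if phrase in text)
    return hits / len(DISTRESS_PHRASES)

def scan(posts: List[str], queue: ReviewQueue, threshold: float = 0.3) -> None:
    # Posts crossing the threshold are only queued for review,
    # never answered automatically.
    for post in posts:
        if distress_score(post) >= threshold:
            queue.escalate(post)

queue = ReviewQueue()
scan(["Great day at the beach!", "I feel like I can't go on anymore"], queue)
print(len(queue.cases))  # prints 1: one post escalated for human review
```

In a real system the keyword matcher would be replaced by a trained text classifier, but the overall shape — automated scoring feeding a human review queue — matches what the article describes.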
For live-streamed videos, on the other hand, Facebook relies on a system of notification and advice. When viewers see worrying behavior, they can notify Facebook’s team and receive guidance on how to act, both automatically and from the team itself.
Facebook chooses not to cut off the transmission, believing that doing so would waste an opportunity to offer help to a person in need.
In his recent note, “Building Global Community”, CEO Mark Zuckerberg addressed the need to detect signs of suicidal users and offer help before it is too late.
“There have been terribly tragic events — like suicides, some live streamed — that perhaps could have been prevented if someone had realized what was happening and reported them sooner,” Zuckerberg wrote. “To prevent harm, we can build social infrastructure to help our community identify problems before they happen.”
Even though Dr John Draper, director of the US National Suicide Prevention Lifeline, views Facebook’s effort positively, he told the BBC that he expects the company to do even more.
In his opinion, Facebook should contact the friends and family of a potentially suicidal user. The company argues that this would not be appropriate, as it does not know the dynamics of people’s relationships; it also has privacy concerns.
According to a statement by Vanessa Callison-Burch, a Facebook product manager, “The AI is actually more accurate than the reports from people that are flagged as suicide and self injury.”
Despite this controversy, the results have been very promising.
The project is currently available only in the US, where it has already been tested.