Artificial intelligence remains an ally in detecting hate speech

According to an official Facebook statement, hate speech has decreased 50% thanks to its artificial intelligence (AI)-based detection technology. However, an investigation published by The Wall Street Journal as part of its Facebook Files series reported that the company's own engineers and researchers had flagged limitations in that AI. Among the content the technology struggles to detect reliably are first-person videos, often carrying racist and xenophobic messages, as well as some audio files.

The statistics portal Statista reported that, between 2019 and 2020, average daily use of social networks rose from 142 to 145 minutes. The Philippines and the United States remain at the top of the list, at 3 hours and 53 minutes and 2 hours and 3 minutes respectively. In addition, in the first quarter of 2021, 3.51 billion users were reported across Facebook's family of services, including WhatsApp, Messenger, and Instagram.

Given the growth in users, time spent online, and volume of information, one of the most critical challenges is running an automated hate speech detection system on Facebook. This was reported by Guy Rosen, the company's vice president of integrity, who added in the statement: "We must ensure that content is hate speech before removing it."

There is an important relationship between hate speech and disinformation channels, especially during the pandemic, when social networks have been a predominant source of information and social contact. Disinformation can trigger episodes of hatred towards specific groups of people, the United Nations Educational, Scientific and Cultural Organization (UNESCO) has reported. Both therefore need to be addressed to foster peace and establish a better environment for social coexistence, and technology is considered an essential ally given the massive volume of information involved.

In response to The Facebook Files investigation, Rosen commented: "We believe that prevalence is the most important metric for knowing whether the platform can combat hate speech." He announced that the use of AI technology on Facebook has reduced the appearance of such content, whose current prevalence is 0.05%, equivalent to 5 out of every 10,000 views. Total hate speech on the network has dropped 50% over the last three quarters, said the head of integrity.
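As a rough illustration of how a views-based prevalence figure works (this is generic arithmetic, not Facebook's internal methodology; only the 0.05% and 50% figures come from the statement, and the earlier value below is assumed purely for the example):

```python
def prevalence(hate_speech_views: int, total_views: int) -> float:
    """Share of all content views that were views of hate speech."""
    return hate_speech_views / total_views

# 5 hate-speech views out of every 10,000 views -> 0.0005, i.e. 0.05%
current = prevalence(5, 10_000)
print(f"current prevalence: {current:.2%}")

# A 50% reduction over three quarters would mean the earlier figure was
# roughly twice the current one (assumed here only to show the arithmetic).
previous = 2 * current
print(f"reduction: {(previous - current) / previous:.0%}")
```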

When it comes to using technology against hate speech and disinformation, some social networks strike a middle ground between automatic detection and human supervision. A notable case is Twitter, which says it considers comments, complaints, and all kinds of feedback from its users with the help of its Trust and Safety Council. One of its main challenges is to set clear criteria for identifying dehumanizing content and to include examples in all languages so that it can be detected with the appropriate technology. A minimal sketch of that middle ground appears below.
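The sketch assumes a hypothetical classifier confidence score and illustrative thresholds; none of the names or values come from Twitter's or Facebook's actual systems. It only shows the general idea of removing high-confidence cases automatically while routing borderline ones to human moderators.

```python
def route_content(score: float,
                  remove_threshold: float = 0.95,
                  review_threshold: float = 0.60) -> str:
    """Route a post based on a model's hate-speech confidence score.

    High-confidence cases are removed automatically, borderline cases
    are sent to human moderators, and the rest are left up.
    (Thresholds are illustrative assumptions, not platform values.)
    """
    if score >= remove_threshold:
        return "remove"
    if score >= review_threshold:
        return "human_review"
    return "keep"

for score in (0.97, 0.75, 0.20):
    print(score, "->", route_content(score))
```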