A new report from the EU Agency for Fundamental Rights (FRA) has warned that abusive comments, harassment and incitement to violence are easily slipping through the content moderation tools being used by online platforms.

The study found that most online hate targets women, but people of African descent, Roma people and Jewish people are also affected.

The report examined four social media platforms: Reddit, Telegram, X (formerly Twitter) and YouTube.

The FRA said it was not able to access data from Facebook and Instagram for the research.

It focused on online activity in Bulgaria, Germany, Italy and Sweden between January and June 2022, with researchers collecting almost 350,000 posts and comments based on specific keywords.

'Lack of understanding' of hate speech

The report found that a lack of access to platforms' data and a lack of understanding of what constitutes hate speech are hampering efforts to tackle online hate. The research concluded that there is no commonly agreed definition of online hate speech.

The study uncovered "widespread online hate": of 1,500 posts that had already been assessed by content moderation tools, more than half were still considered hateful by human coders.

It found that women are the main targets of online hate across all the platforms and countries researched, with most hate speech directed at women involving abusive language, harassment and incitement to sexual violence.

According to the study, people of African descent, Roma people and Jewish people are the most frequent targets of negative stereotyping.

It found that almost half of all hateful posts were direct harassment.

Report's recommendations

The FRA said that to prevent online hate, platforms should pay particular attention to protected characteristics such as gender and ethnicity in their content moderation and monitoring efforts.

"Very large online platforms, such as X or YouTube, should include misogyny in their risk assessment and mitigation measures under the Digital Services Act (DSA)," the report found.

Researchers said that the EU and national regulators should provide more guidance on identifying illegal online hate, adding that the European Commission and national governments should create and fund a network of trusted flaggers, involving civil society.

"The police, content moderators and flaggers should be properly trained, to ensure that platforms do not miss or over-remove content," the FRA said.

It added that providers and users of automated content moderation tools should test their technology for bias to protect people from discrimination.

"The sheer volume of hate we identified on social media clearly shows that the EU, its member states, and online platforms can step up their efforts to create a safer online space for all, in respect for human rights including freedom of expression," said FRA Director Michael O’Flaherty.

"It is unacceptable to attack people online just because of their gender, skin colour or religion," Mr O'Flaherty said.