How Machine Learning Uncovers Hidden Trends in Online Hate

Man working on laptop in dark room.

Short summary

  • Norway values open and respectful debates online, but hate speech disrupts those opportunities.
  • Using machine learning to analyze patterns of anti-Muslim hate speech online shows that the number of hateful posts is growing and that most are made by a small group of people. 
  • The people who post hate speech often don’t stick around for long. Engaging with them and encouraging them to quit could therefore be an effective tool alongside deleting posts and ignoring trolls.  

Public debate in Norway increasingly plays out online. Social media platforms are places where people share news, argue politics, and express their views about religion and identity.

But they are also places where hate speech can spread quickly.

OsloMet senior researcher Yuri Kasahara leads one of the first large-scale efforts in Norway to use machine learning to analyze anti-Muslim hate speech online. His work with the Norwegian Directorate for Children, Youth, and Family Affairs (Bufdir) examined 12 years of public comments from Facebook and Twitter (now X) from 2010 to 2022.

The results offer a rare, data-driven look at how hate speech evolves, who produces it, and what it means for a society that prides itself on open and respectful debate.

The machine learning shift

Machine learning is a kind of AI that can be trained to recognize and categorize text. It is related to the generative AI that powers ChatGPT, but rather than spitting out a generic answer, models like these are designed to find meaningful patterns within a sea of data.

Kasahara brings machine learning to bear on social science problems. Over the last several years, he has developed a model that identifies trends in anti-Muslim hate speech on social media platforms in order to better understand, and fight, this disturbing phenomenon.

Studying hate speech has traditionally been painstaking work: researchers would manually collect posts, read each one, classify them by topic, and then calculate statistics.

“Before, you would get maybe a few thousand comments on Facebook or Twitter and then classify manually and then calculate some statistics,” Kasahara explains. “But big data and machine learning give us a more encompassing overview of how hate speech has been evolving in Norwegian social media against, in this case, Muslim minorities.”

The team was able to collect around one million public comments from Facebook and Twitter. Those companies’ policies have since changed, making similar projects far more expensive and difficult today.

A miserable process

Machine learning made this possible, but few models existed for analyzing Norwegian-language hate speech. Norwegian contains slang, dialects, and culturally specific phrases that don’t translate well into English.

So Kasahara first had to build the model himself. “A colleague and I went through almost 4,000 comments, classifying manually, reading a lot of crap, to train a model capturing these nuances.”

He says it was a miserable process, but necessary. Once trained, the model could process millions of posts.
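The supervised approach Kasahara describes can be sketched in a few lines of code: hand-labeled comments train a classifier, which can then score unseen posts at scale. The toy Naive Bayes model and invented example comments below are illustrative assumptions only; the actual model trained on the ~4,000 labeled comments was far more sophisticated.

```python
# Toy sketch of supervised text classification: hand-labeled comments
# train a model that can then score unseen posts. All example comments
# are invented; 1 = hateful, 0 = not hateful.
from collections import Counter
import math

train = [
    ("de ødelegger landet vårt", 1),
    ("flott innlegg helt enig", 0),
    ("kast dem ut av landet", 1),
    ("interessant artikkel om politikk", 0),
]

# Count word frequencies per class, as a simple Naive Bayes model does.
word_counts = {0: Counter(), 1: Counter()}
class_counts = Counter()
for text, label in train:
    class_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for counts in word_counts.values() for w in counts}

def predict(text):
    """Return the more likely class for a comment (Laplace-smoothed)."""
    scores = {}
    for label in (0, 1):
        total = sum(word_counts[label].values())
        score = math.log(class_counts[label] / len(train))
        for word in text.split():
            score += math.log(
                (word_counts[label][word] + 1) / (total + len(vocab))
            )
        scores[label] = score
    return max(scores, key=scores.get)

# Once trained, the classifier can score new, unseen comments at scale.
print(predict("de ødelegger alt i landet"))  # → 1
```

In practice, models for Norwegian also have to handle slang, dialects, and coded expressions, which is exactly why so much manual labeling was needed before any automation could begin.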

When Kasahara and his colleagues looked at these replies, retweets, and comment threads to news articles posted to social media, they could trace how discussions unfolded over time and measure whether conversations became more toxic as they grew.

A country that values civil debate

“Norway is a unique case. They’re very worried about the quality of public discourse and allowing healthy public debate to take place,” Kasahara says. “I think that's one of the beauties of Norwegian society; they really set a high value on people being able to hold debates in a civilized, polite way.” 

The anti-Muslim hate speech that pops up every time there is a major news story, such as stereotypes that portray Muslims as repressive, linked to terrorism, or “taking over Europe”, is antithetical to this view. Understanding what it is and how it spreads is therefore important to a free Norwegian society.

Hate speech is growing

Kasahara found that the share of hateful comments within conversations rose from roughly 1% to about 4% over the 12-year study period. The share is still small, but its impact is large, and the total volume of hateful comments has grown.
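As a toy illustration of how such a trend is measured, the classifier’s per-comment labels can be aggregated into a yearly share of hateful comments. The records below are invented numbers for illustration, not the study’s data.

```python
# Aggregate per-comment classifier labels into a yearly share of
# hateful comments. The (year, is_hateful) records are invented.
from collections import defaultdict

records = [
    (2010, 1), (2010, 0), (2010, 0), (2010, 0),
    (2016, 1), (2016, 1), (2016, 0), (2016, 0),
    (2022, 1), (2022, 1), (2022, 1), (2022, 0),
]

totals = defaultdict(int)
hateful = defaultdict(int)
for year, is_hateful in records:
    totals[year] += 1
    hateful[year] += is_hateful

for year in sorted(totals):
    share = hateful[year] / totals[year]
    print(f"{year}: {share:.0%} hateful")
```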

But measuring how these comments change is complicated. Machine learning models trained on data from a short timespan may miss earlier or later forms of hate that use different language or coded expressions.

There were other limitations. The study focused on comments under news articles and did not examine attacks against specific public officials, such as Muslim politicians, who often face even higher levels of abuse. And because of strict privacy laws like the GDPR, the researchers could not collect demographic data about users.

Still, the results offer valuable insight into how hate speech propagates and who produces it.

A small group of haters

One encouraging finding was that a large share of anti-Muslim hate speech came from just a small group of individuals.

While bots, foreign agitators, and coordinated disinformation campaigns are a problem, Kasahara says that the top contributors appear to be real individuals. Importantly though, over the 12 years of the study, the people generating the most hate changed.

“There is this dynamic,” Kasahara says, “some people very strongly engage in hating someone or some group of people. But then they have an epiphany and decide to change their lives and say ‘okay, no, I'm going to quit this process’.”

Portrait of senior researcher Yuri Kasahara.

OsloMet senior researcher Yuri Kasahara leads one of the first large-scale efforts in Norway to use machine learning to analyze anti-Muslim hate speech online. Photo: Joachim Engelstad / OsloMet

This means that hate speech online is a structural phenomenon, but also a deeply personal and often temporary one. Some individuals appear intensely engaged for a period, then stop because they change their minds, face consequences, or simply lose interest.

This has implications for how to respond to hate speech. Norway has laws addressing hate speech, platforms can remove offenders, and flooding comment threads with counter-speech can dilute toxic posts, but Kasahara suggests that control and censorship alone are not enough.

“There are ways to deal with hate speech that are not only based on control and command strategies,” he says. While the general rule of “don’t feed the trolls” holds, he says “I also think we need to actually engage with these individuals and try to have a dialogue with them, get them to move on.”

Lessons from hate

Kasahara’s original project has concluded, but he and his colleagues are now applying similar machine learning tools to analyze discussions within Norway’s municipal youth councils. The aim is to better understand what young people support or criticize in local political debates.

Analyzing hate speech with machine learning cannot end hate speech, but it can show patterns. Tracking changes over time and developing an understanding of the dynamics of hate speech gives platforms and policymakers the tools they need to begin addressing it. For a country like Norway that is committed to open and respectful public discourse, that knowledge is essential.

Reference 

Kasahara, Y., Schroeder, D. T., Yazidi, A., & Lind, P. G. (2026). The Dynamics of Hate Speech: Assessing Anti-Muslim Hate Speech in Norwegian Social Media (journals.sagepub.com).

Published: 17/03/2026
Last updated: 17/03/2026
Text: Matthew Davidson
Photo: Johner Images / Plattform / NTB