Antisemitic, racist and white supremacist material found in podcasts on Spotify, investigation finds

A Sky News investigation has found antisemitic, racist and white supremacist material in podcasts on one of the most popular streaming services, Spotify.

The company said it does not allow hate content on its platform.

But we found podcasts, totalling several days’ worth of listening, that promote extreme views such as scientific racism, Holocaust denial and far-right antisemitic conspiracy theories.

And while some of the most shocking material was buried inside hours-long episodes, in some cases explicit slurs appeared in episode titles and descriptions, and album artwork displayed imagery adopted by white supremacists.

Spotify removed the content after we reported it to the streaming giant.

But many of these podcasts remain online elsewhere, including in largely unmoderated directories like Google Podcasts.

Google did not respond to our request for comment.

And experts are concerned that the “readily accessible” nature of this material could lure people towards extremism.

Content warning: this article includes references to racist, antisemitic and white supremacist language and ideas

Image: Spotify is one of the biggest podcast streaming platforms, with more than three million podcasts available

One of the first results returned on Spotify when searching for the phrase “Kalergi Plan” directed us to a series which, at the time, had 76 episodes listed on the platform.

The so-called “Kalergi Plan” is a far-right antisemitic conspiracy theory which alleges that Jewish elites are behind a deliberate plan to erase the white European race by promoting mass immigration.

We have chosen not to name any of the podcast series mentioned in this article to avoid publicising their content.

In one episode, the speaker explicitly promotes the Kalergi Plan.

He claims that the European elite has been “replaced” by a “new urban nobility” made up of Jewish elites.

The nine-minute monologue ends with an explicit call to violence against Jewish people.

Another episode by the same creator advances the racist and unfounded idea that white people are biologically superior to people of colour.

“There is something about [white men] that makes us privileged, it’s in our blood,” he says.

He promotes this view, unchallenged, for 13 minutes. The monologue is littered with dehumanising language and makes comparisons that are too offensive to be included in this article.

Image: This flag, depicting one of the Norse god Odin's ravens, has been adopted by some white supremacists

The album artwork for the series depicts the raven flag – a symbol originally found in Norse mythology but one that has been appropriated by some white supremacists in recent years.

We showed our findings to Maurice Mcleod of Race on the Agenda, a social research charity focusing on issues impacting ethnic minorities in the UK.

“This is incredibly dangerous,” he told Sky News.

“Early this May we had the highest [monthly] number of reported incidents of antisemitism and, in the year to March, we had 115,000 reported incidents of hate crime. Now that’s just what’s reported, which is always only the tip of the iceberg.”

“It feels like it’s normalising this sort of thing if you can go on Spotify and listen to Adele and then you can listen to this stuff right next to it,” he said.

The Kalergi Plan is a variation of the white nationalist Great Replacement conspiracy theory.

Jacob Davey, head of research and policy on far-right and hate movements at the Institute for Strategic Dialogue, said it was a belief that had been steadily increasing in popularity over the past decade.

“It’s gone from what really was quite a fringe talking point among a few European extremists to the bread and butter discussion of extremists globally,” he told Sky News.

But these ideas do not exist in an online vacuum, he said.

Image: A policeman walks by a memorial to the 51 people killed in the 2019 Christchurch terror attack. Pic: Reuters/Edgar Su

“In 2019, when an individual committed a really horrific terror attack in Christchurch, New Zealand, he was directly doing that in response to this theory,” said Mr Davey.

“And after that attack, there were a number of others throughout 2019. The spread of these ideas can really have a noticeable impact in compelling people to go on and commit atrocious violence.”

This is just one of the series we came across.

Another one, hosted by US-based alt-right creators, uses racist slurs and white supremacist symbols in the episode titles and descriptions.

The hosts casually and openly promote a range of antisemitic and racist beliefs and theories including Holocaust denial and scientific racism.

A third series from a different creator included episodes discussing what they refer to as the “beauty” of white supremacy, as well as readings of essays and books by prominent figures of the Nazi Party, including Adolf Hitler and Joseph Goebbels.

The creator often used the episode description box on Spotify to advertise videos shared on other platforms. One link directs users to a video of a reading of what it calls “Dylann Roof’s insightful manifesto”.

Image: Dylann Roof, who shot dead nine black churchgoers, became the first person in the US to be sentenced to death for a federal hate crime in 2017. Pic: Reuters/Pool

Other episode descriptions link to a Telegram channel that has a swastika as its icon.

These three series amount to almost 150 hours of content.

In response to our findings, a Spotify spokesperson said: “Spotify prohibits content on our platform which expressly and principally advocates or incites hatred or violence against a group or individual based on characteristics including race, religion, gender identity, sex, ethnicity, nationality, sexual orientation, veteran status, or disability.

“The content in question has been removed for violating our Hate Content policy.”

Image: Spotify removed nearly 150 hours of content after Sky News reported it to the streaming service

The platform does allow users to report material that violates its content guidelines. The company also said it is developing new monitoring technology to identify material that has been flagged as hate content on some international registers.
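Register-based matching of that kind generally works by comparing identifiers of already-flagged items against new uploads. The sketch below shows the general idea in Python; the register format, the hash choice and the file names are purely hypothetical assumptions for illustration, and Spotify has not described how its system works.

```python
# A minimal sketch of register-based matching: hash an incoming item and
# check it against identifiers shared by an external register of known
# hate content. The register contents and file paths are hypothetical.
import hashlib

# Hypothetical: digests previously flagged and shared by a register.
REGISTER_HASHES = {
    "9f2c...": "previously flagged audio item",  # placeholder entry
}

def sha256_of_file(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(8192), b""):
            h.update(block)
    return h.hexdigest()

def is_registered(path: str) -> bool:
    """True if the file's digest appears in the shared register."""
    return sha256_of_file(path) in REGISTER_HASHES

if __name__ == "__main__":
    print(is_registered("new_upload.mp3"))  # hypothetical upload
```

An exact-match check like this can only catch re-uploads of material that has already been flagged somewhere; novel audio still needs other forms of review.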

But what is currently being done to moderate its podcasting platform beyond responding to user reports is not public knowledge.

The sheer volume of content online means that technology companies require algorithms as well as people to moderate their platforms.

And while technology capable of detecting hate speech in audio is being developed, it’s not yet being widely deployed.

“One of the problems is that it takes a lot more memory to store long audio files. The other problem is that it’s messy – you can have multiple speakers and fast-paced dialogue,” said Hannah Kirk, AI researcher at the Oxford Internet Institute and the Alan Turing Institute.

“There’s also tonnes of extralinguistic cues in audio: the tone, the pitch of voice, even awkward silences or laughter. And that’s a problem because we don’t yet have the technology to accurately encode those kinds of extralinguistic signals,” she told Sky News.

Ms Kirk said it is possible that companies like Spotify are hitting resource or technology constraints that mean they are not able to moderate their audio content at scale.

But, she said, the option is available for companies to transcribe audio content and run it through text processing models trained to detect hate, which are far more advanced.
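As a rough illustration of that transcribe-then-classify approach, the sketch below transcribes an episode with the open-source Whisper speech-to-text model and runs the transcript, in chunks, through a text classifier. This is a minimal sketch under our own assumptions: the openai-whisper and transformers packages and the classifier named below are illustrative choices, not tools Spotify is known to use.

```python
# A minimal sketch of the transcribe-then-classify approach described above.
# Assumptions (not anything Spotify has confirmed using): the openai-whisper
# and transformers packages, and a public hate-speech classifier on the
# Hugging Face hub. The model name below is illustrative.
import whisper
from transformers import pipeline

def flag_episode(audio_path: str, chunk_words: int = 200) -> list[dict]:
    """Transcribe an episode and return transcript chunks flagged as hateful."""
    # Step 1: speech-to-text. "base" is a small general-purpose Whisper model.
    transcript = whisper.load_model("base").transcribe(audio_path)["text"]

    # Step 2: split the transcript into short chunks so each one fits
    # within the text classifier's input limit.
    words = transcript.split()
    chunks = [" ".join(words[i:i + chunk_words])
              for i in range(0, len(words), chunk_words)]

    # Step 3: run each chunk through a classifier trained on hate speech.
    classifier = pipeline(
        "text-classification",
        model="facebook/roberta-hate-speech-dynabench-r4-target")
    results = classifier(chunks, truncation=True)

    # Step 4: surface chunks the model labels as hateful for human review.
    return [{"text": chunk, "score": res["score"]}
            for chunk, res in zip(chunks, results)
            if res["label"] == "hate"]

if __name__ == "__main__":
    for hit in flag_episode("episode.mp3"):  # hypothetical file
        print(f'{hit["score"]:.2f}  {hit["text"][:80]}...')
```

Even in this simplified form, the sketch reflects Ms Kirk's point: the text-classification step is comparatively mature, while reliably getting text out of long, messy, multi-speaker audio remains the hard part.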

We also found some of the same series on Google Podcasts.

Google’s podcasting arm operates as a directory rather than a platform, meaning that it does not host content on its own server and instead collates podcast feeds that it automatically scrapes from the internet.
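In practice, a directory-style service works by parsing publicly available podcast RSS feeds rather than hosting the audio. The short sketch below shows what that ingestion can look like; the feedparser package and the feed URL are illustrative assumptions, not Google's actual pipeline.

```python
# A minimal sketch of directory-style podcast ingestion: parse a public RSS
# feed and index episode metadata without hosting any audio ourselves.
# The feedparser package and the feed URL are illustrative assumptions.
import feedparser

FEED_URL = "https://example.com/podcast/feed.xml"  # hypothetical feed

feed = feedparser.parse(FEED_URL)
print(f"Series: {feed.feed.get('title', 'unknown')}")

for entry in feed.entries[:5]:
    # Each entry is one episode; the audio stays on the publisher's
    # server, referenced by an "enclosure" link in the feed.
    audio = next((link.href for link in entry.get("links", [])
                  if link.get("rel") == "enclosure"), None)
    print(entry.get("title", "untitled"), "->", audio)
```

Because a directory never stores the audio itself, delisting a feed removes it from search results but does not take the content offline, which is one reason these podcasts can remain available even after a hosting platform removes them.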

The company has received criticism before for allowing users to access extreme and misleading content on its interface. It’s one of the few remaining places users can still find infamous conspiracy theorist Alex Jones’s podcast.

We reported our findings to Google but it did not respond. The series we flagged remain on its platform.

A spokesperson for the company previously told the New York Times that it “does not want to limit what people can find” and that it only removes content in rare circumstances.

But experts are concerned that the accessibility of extreme material on these popular platforms could lead people to become radicalised.

“If you think about platforms like Facebook and Twitter, they’ve had a lot of issues with this and they still have issues,” said Mr Davey.

“And although they’re not doing it perfectly, they are taking an active effort to try and remove that sort of content from their platforms.

“The fact that it’s so easy to access on podcasting platforms suggests there’s a major gap and that these platforms aren’t taking these issues seriously.

“To me, that says that they have a real laissez-faire attitude towards extremist material and the spread of hateful ideology.”

