John Letzing, Digital Editor for Strategic Intelligence at the World Economic Forum, considers the growing problem of online radicalization, what should be done about it, and by whom:
What do TV show predictions have to do with far-right conspiracy theories? On TikTok, an app popular with children younger than 14, there’s a very short path from one to the other.
According to a pair of reports published recently by a media watchdog, TikTok can swiftly channel young users from relatively benign interests to more troubling topics – and even into the arms of some of the extremist movements involved in the deadly attack on the U.S. Capitol in January.
A steadily building body of evidence suggests that radicalization by way of the internet is a very real and dangerous phenomenon – and that evidence appears to be reaching critical mass. So, what should be done about it?
Twitter CEO Jack Dorsey acknowledged during a congressional hearing last month that the social media service played a role in the white supremacist attack on the U.S. Capitol. Dorsey told lawmakers that Twitter is working to address extremism and misinformation.
Earlier this month, YouTube made its first public disclosure of the percentage of views coming from videos later removed for rules violations including promoting violent extremism. But it stopped short of sharing what would likely be the “eye-popping” total number of views these videos garner before they disappear.
Social media’s algorithmic tentacles require surprisingly few prompts to pull people into a cascade of xenophobic, racist, anti-Semitic, and religious extremist messaging. The real-world results have piled up, and while some content has had only an indirect impact, it’s been no less deadly – like the anti-science propaganda blamed for killing thousands.
According to a report published in December, social media platforms including YouTube helped radicalize the perpetrator of a 2019 terrorist attack on mosques in Christchurch, New Zealand that left 51 people dead. The report noted the assailant’s belief in the “Great Replacement” theory, which holds that white populations are being disempowered and replaced by people of color – a theory also popular among rioters at the U.S. Capitol.
A French writer is credited with popularizing the Great Replacement theory roughly a decade ago. Since then it has been heavily promoted online by groups like Generation Identity (Génération identitaire), which was banned by the French government last month.
In India, Facebook has struggled with its response to violent religious extremists; it refrained from banning them for fear of endangering the company’s staff and business prospects. In Australia, a government official recently drew a parallel between the ways right-wing extremists there recruit online and the methods used by the Islamic State.
(While the Islamic State has suffered defeats in the Middle East, recent efforts to bolster its profile online have involved talk on forums of establishing a new caliphate in Africa.)
In Germany, one study found a direct link between anti-refugee sentiment online and violent attacks. It suggested that a right-wing political party’s social media posts have likely pushed “some potential perpetrators over the edge.”
One way to try to curb online extremism is by deplatforming the most popular and troublesome instigators. However, they can often simply migrate to seedier corners of the internet and bring their followers with them.
Stiffer rules and regulations may therefore be in the works for an industry that’s mostly been left to its own devices. One American lawmaker opened last month’s congressional hearing on social media’s role in promoting extremism by declaring that “self-regulation has come to the end of its road.”