Extremists are constantly adapting, finding creative ways to reach new audiences and new ways to evade content moderators. Their latest efforts are evident on TikTok, the Chinese social media app that allows users to create and share short videos. In less than four years, TikTok has racked up more than 800 million users, and its popularity is fast approaching that of social media giants like Facebook, WhatsApp and Instagram. Each TikTok video has the potential to reach hundreds of millions of users, many of whom are children.
Despite TikTok’s efforts to moderate and remove extremist content via community guidelines that cover a wide range of problematic material, from hate speech to discussions of self-harm, a cursory review of the platform by ADL’s Center on Extremism found that white supremacists and antisemites are using a range of methods to recruit new adherents and share hateful content. The following is an overview of some of the techniques extremists use to evade detection and amplify their hateful messages. It does not reflect an audit of all TikTok content.
ADL has alerted TikTok to a range of examples of hate speech on its platform, including the examples that follow, and in recent weeks ADL’s Center on Technology and Society and Center on Extremism have been working closely with TikTok’s Trust and Safety team to help it identify and address the hate on the platform. TikTok has committed to working closely with ADL to address these issues going forward.