As the games industry fails to confront rising toxicity in online gaming, ADL (the Anti-Defamation League) today released a new report revealing structural obstacles that prevent the industry from moderating content effectively. The research identifies shortcomings in policies and in the use of trust and safety teams, and defines key steps companies can take to counter the spread of toxicity, such as focusing on the small number of repeat offenders and applying content moderation as a principle across the production process.
“The scale of harassment, antisemitism, racism and misogyny on gaming platforms – where so many young people seek entertainment and community – is unacceptable,” said Jonathan Greenblatt, ADL CEO and National Director. “Recent changes at social media companies show us the catastrophic results when the tech industry abdicates its moderation responsibilities. Gaming companies must learn from those mistakes and seriously step up their commitments to creating safe platforms.”
ADL Center for Technology and Society researchers found several persistent issues, including:
- Trust and safety teams struggle with limited capacity, small budgets and unmanageable workloads. They are caught in a catch-22: For company executives to invest in content moderation, there must be data to document the problem. But trust and safety teams don’t have the resources they need to collect such data.
- Game companies have inconsistent priorities when it comes to content moderation.
- Focusing moderation on individuals results in treating those who make occasional mistakes (the vast majority of players) the same as repeat offenders who have nefarious intent.
To address these challenges, ADL recommends that tech companies:
- Create a separate enforcement category for repeat offenders, who are disproportionately and persistently toxic. Game companies should identify clusters of users who disproportionately exhibit bad behavior instead of trying to catch and punish every rule-breaking individual.
- Try player reform strategies before punitive measures like banning, warning players that their actions will have consequences.
- Implement consistent feedback not only to warn against toxic behavior but also to support and reward good behavior.
- Build community resilience. Positive content moderation tools work. Use social engineering strategies such as endorsement systems to incentivize positive play.
- Invest more resources in understaffed and overwhelmed trust and safety teams.
- Make content moderation a priority in the creation and design of a game – this should be central from a game’s conception to its discontinuation.
“While most users come to online games for entertainment and community, they’re far too often exposed to hate and extremism. Companies must implement consistent moderation policies and fully empower trust and safety teams,” said Yael Eisenstat, ADL Vice President, Center for Technology and Society. “Despite economic headwinds, we urge the industry to treat content moderation as a vital business decision, one that makes their products sustainably less toxic for all users.”
Since 2019, ADL has conducted an annual survey of hate, harassment and extremism in online multiplayer games. Our 2022 survey found that hate continues unabated: 77 percent of adults experienced severe harassment, including physical threats, stalking and sustained harassment.