Hateful Usernames in Online Multiplayer Games

Key Takeaways

·  Despite game companies having policies prohibiting hate, researchers at the ADL Center for Technology and Society easily found usernames in five categories of hate (antisemitism, misogyny, racism, anti-LGBTQ+ hate, and ableism) across five popular online multiplayer games (League of Legends, PUBG, Fortnite, Overwatch 2, and Call of Duty).

·  Our researchers also found many usernames containing obvious white supremacist terms across the online multiplayer games investigated. This underscores the fact that none of the game companies examined in this project has a policy prohibiting extremist ideologies.

·  Usernames are a basic part of any online video game experience. When a player signs up to play an online multiplayer game, they usually have to enter an email address, create a password, and then create a username.

·  Usernames are also one of the easiest pieces of content for game companies to moderate: they can be screened before they are approved, they are persistent in a game space, and they are tied to an individual account. (A minimal sketch of such pre-creation screening appears at the end of these takeaways.)

·  Additionally, usernames are one of the few elements of online multiplayer games that external researchers can investigate, both through third-party tools and by testing whether particular terms can be registered. This highlights the need for game companies to share data with researchers so that their stated efforts can be evaluated externally.

·  Our results demonstrate that the game industry has not invested enough in even the simplest measures to address hate and extremism in online games.

·  Of the five games examined, Overwatch 2 returned the fewest offensive usernames in our searches, which may indicate an effective username policy that other games could adapt.
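To illustrate how pre-creation screening could work in practice, here is a minimal, hypothetical sketch of a registration-time username check. The denylist entries, substitution rules, and function names are illustrative assumptions, not any game company's actual moderation system; a production filter would rely on much larger, regularly updated term lists and human review.

```python
import re

# Hypothetical denylist; a real system would use far larger, regularly
# updated term lists maintained by a trust and safety team.
DENYLIST = {"exampleslur", "examplehategroup"}

# Common character substitutions players use to evade filters.
SUBSTITUTIONS = str.maketrans(
    {"0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "$": "s", "@": "a"}
)

def normalize(username: str) -> str:
    """Lowercase, undo simple leetspeak substitutions, and strip separators."""
    collapsed = username.lower().translate(SUBSTITUTIONS)
    return re.sub(r"[^a-z]", "", collapsed)

def is_username_allowed(proposed_name: str) -> bool:
    """Reject any proposed username whose normalized form contains a denylisted term."""
    normalized = normalize(proposed_name)
    return not any(term in normalized for term in DENYLIST)

# The check runs once at registration, before the name ever appears in-game,
# so a rejected name never reaches other players.
print(is_username_allowed("Friendly_Player42"))  # True
print(is_username_allowed("Ex4mple$lur_99"))     # False
```

Because a username is reviewed a single time, before anyone else sees it, this kind of check is far cheaper than moderating live voice or text chat.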

Recommendations

1. Assign more resources to understaffed and overwhelmed trust and safety teams in game companies. These teams are bogged down by institutional challenges, such as explaining the value of content moderation to skeptical executives and securing bigger budgets to hire more staff and expand their work.

2. Make content moderation a priority in the creation and design of a game. Trust and safety experts say content moderation should be central from a game’s conception to its discontinuation.

3. Focus content moderation on the toxic 1%. Networks, not individuals, spread toxicity. Game companies should identify clusters of users who disproportionately exhibit bad behavior instead of trying to catch and punish every rule-breaking individual (see the sketch after this list).

4. Build community resilience. Positive content moderation tools work. Use social engineering strategies such as endorsement systems to incentivize positive play. 

5. Use player reform strategies. Most players respond better to warnings than to punitive measures.

6. Provide consistent feedback. When a player submits a report, send an automated thank-you message. When a determination is made, tell the reporting player what action was taken. This not only shows players that the team is listening; it also models positive behavior.

7. Avoid jargon and legalese in policy guidelines. These documents should be concise and easy for players to read. Every game should have a Code of Conduct and Terms of Service.
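As a rough illustration of the "toxic 1%" recommendation above, the sketch below flags the small set of accounts responsible for a disproportionate share of upheld reports. It is a simplification under stated assumptions: the report log, account IDs, and threshold are hypothetical, and it ranks individual accounts by report counts rather than performing the fuller network analysis a game company would need.

```python
from collections import Counter

def flag_top_offenders(upheld_reports, top_fraction=0.01):
    """Flag the small slice of accounts that generate a disproportionate
    share of upheld abuse reports.

    upheld_reports: iterable of account IDs, one entry per upheld report.
    top_fraction:   fraction of reported accounts to flag for review.
    """
    counts = Counter(upheld_reports)
    ranked = counts.most_common()                     # accounts sorted by report count
    cutoff = max(1, int(len(ranked) * top_fraction))  # flag at least one account
    flagged = ranked[:cutoff]
    share = sum(n for _, n in flagged) / sum(counts.values())
    return flagged, share

# Toy data: a handful of accounts produce most of the upheld reports.
reports = ["acct_17"] * 40 + ["acct_03"] * 25 + ["acct_88"] * 5 + ["acct_61", "acct_12", "acct_29"]
flagged, share = flag_top_offenders(reports, top_fraction=0.2)
print(flagged)                    # [('acct_17', 40)]
print(f"{share:.0%} of reports")  # the single flagged account drives ~55% of upheld reports
```

Concentrating review on the accounts, and the clusters of accounts they play with, that drive most verified reports lets a small trust and safety team act where it has the most effect.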

The games industry has reached an inflection point, with games acting as powerful vectors of harassment and radicalization. Regardless of age, players are likely to experience abuse when they play.

Games function as entertainment and sources of community for millions of people. If the industry continues to deprioritize content moderation, it will send a clear message to users, especially marginalized groups, that games are not safe, welcoming spaces for all. We hope this report serves as constructive criticism for an industry that figures prominently in the American social and cultural landscape.