Since 2019, the ADL Center on Technology and Society (CTS) has conducted an annual survey of hate, harassment, and extremism in online multiplayer games.
Our 2022 survey found that 86% of adults and 66% of teens experience hate and harassment in online games. These rates have trended upward every year for the last four years. Additionally, 20% of adults have been exposed to white supremacist ideologies in online games, a deeply concerning spike from 8% in 2021.
In March of this year, CTS published a report analyzing the policies of 12 games, as well as the trust and safety priorities of the industry, and found that companies were still falling short. As a result, CTS will be conducting deeper investigations into how hate manifests in different elements of online games. For this first investigation, this report focuses on hateful usernames, which should be the easiest content for companies to moderate.
Usernames are a basic part of any online experience. On social media, users must choose a username for their accounts that displays whenever they post or reply to anyone. This name becomes an identifier unique to that person and represents them.
Online video games also have usernames to identify each unique player. When a player signs up to play an online multiplayer game such as Fortnite, they usually have to enter an email address, create a password, and then create a username. For the most part, players are free to create any username they can imagine unless that exact username is already taken.
These names are then displayed alongside the player's messages in in-game chats and usually above their player avatar in the game's world.
However, some players use offensive usernames as a tool to create a hateful and hostile gaming environment. Such names include insults, ethnic slurs, profanity, and other offensive terms.
Academic research on Reddit and the online multiplayer game League of Legends found that offensive usernames correlated with antisocial behavior toward other users and players.
Filtering offensive terms from usernames is one of the simplest steps game companies can take to encourage safer online communities. Usernames are short, static, and entirely text-based, making them far simpler to moderate than live voice or in-game text chat.
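To illustrate the kind of text-based filtering described above, here is a minimal sketch of a denylist username filter. The `DENYLIST` terms and the leetspeak substitution map are illustrative assumptions, not any company's actual word list; production systems use far larger, regularly updated lists, which is precisely where the gaps this report documents arise.

```python
import re

# Illustrative denylist (assumption); real systems maintain much larger,
# regularly updated lists, including coded and evolving terms.
DENYLIST = {"hate", "slur"}

# Common character substitutions used to evade naive string matching.
LEET_MAP = str.maketrans({"4": "a", "3": "e", "1": "i",
                          "0": "o", "5": "s", "$": "s", "7": "t"})

def normalize(username: str) -> str:
    """Lowercase, undo common leetspeak substitutions, strip separators."""
    name = username.lower().translate(LEET_MAP)
    return re.sub(r"[^a-z]", "", name)

def is_allowed(username: str) -> bool:
    """Reject a username if any denylisted term appears after normalization."""
    normalized = normalize(username)
    return not any(term in normalized for term in DENYLIST)
```

For example, `is_allowed("friendlyfox")` passes, while `is_allowed("xX_h4te_Xx")` is rejected because normalization reduces it to a string containing a denylisted term. A filter like this only catches terms it already knows about, which is why newer coded vocabulary slips through.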
Many game companies have claimed they are addressing the issue of hateful usernames in their online games. The game companies studied as part of this report responded in detail to a December 2022 letter from Representative Lori Trahan of Massachusetts’ 3rd Congressional District and a coalition of lawmakers that asked game publishers what measures they take to protect players from hate and extremism in online games.
Riot Games stated that they use “filtering technology and name checking system [to] prevent extremists from communicating and using screen names linked to extremist ideology.” Innersloth LLC randomizes usernames for players under the age of thirteen.
Other game companies that mentioned using filtering technology in their games (especially for in-game chat) were Microsoft Gaming, Square Enix, Take-Two Interactive Software, Ubisoft, Valve, and Sony Interactive Entertainment.
CTS's investigation found that while game companies have filtered out some of the most obvious offensive terms and slurs, their filtering systems have many holes. In particular, newer terms and more abstract code words referring to white supremacist ideology are not being caught by username filtering.
This report is based on CTS’s examination of registered usernames in five popular games. CTS also tested the username registration systems for two of these popular games.