Ubisoft and Riot Games are launching Zero Harm in Comms, a research project focused on AI-based solutions to toxicity during player interactions in multiplayer games.
The program aims to expand the abilities of their AI tech to halt hostile, bigoted, or negative interactions between players. It will attempt to build a cross-industry database and labeling system, which will then be used to train AI moderation tools to preemptively find and stop bad behavior. This player data will be anonymized, according to Ubisoft’s press release, as part of the program’s effort to protect privacy and conduct the research ethically.
The project is the brainchild of Yves Jacquier, the executive director of Ubisoft La Forge, and Wesley Kerr, head of technology research at Riot. Jacquier said in a press release, “We believe that, by coming together as an industry, we will be able to tackle this issue more effectively.” Kerr emphasized the project’s potential to affect spaces outside of games, saying, “Disruptive behavior isn’t a problem that is unique to games — every company that has an online social platform is working to address this challenging space.”
Riot’s multiplayer-focused catalog, combined with the wide swath of Ubisoft titles, is intended to supply a broad range of players and cases. While an AI cannot possibly detect every instance of bad behavior, the tech should, in theory, be able to catch more issues with a higher rate of accuracy. This is the first part of an ongoing research project, which started roughly six months ago. No matter the outcome, Riot and Ubisoft intend to share the results of this first phase with the rest of the industry next year.
Both Ubisoft and Riot Games have faced accusations of toxic and mismanaged workplaces. Earlier this year, Riot agreed to a $100 million settlement and three years of independent oversight. In 2021, Ubisoft CEO Yves Guillemot addressed workplace changes in the wake of the allegations in an open letter.