ToxMod Looks To Remove Toxic Speech From Gaming

This sounds interesting.

Modulate Inc., a technology company using artificial intelligence to improve online voice chat, has introduced ToxMod, the world’s first voice-native moderation service. Rather than merely transcribing what’s said, ToxMod uses the voice itself as a signal, moderating more effectively than text-based moderation can on its own.

“ToxMod helps game developers detect toxic, disruptive, or otherwise problematic speech in real-time, and respond immediately and appropriately,” said Mike Pappas, Co-Founder and CEO of Modulate. “Developers now can deal with offensive speech by not just shutting down whole conversations, but also taking more nuanced actions like blocking individual words such as racial slurs or identifying information like a phone number.”

ToxMod uses sophisticated machine learning models to understand not just what each player is saying, but how they are saying it – including their emotion, volume, prosody, and more. This allows for a much more nuanced and accurate prediction of whether a specific statement is intended as toxic or disruptive. Moderators can now identify bad actors and preemptively resolve a problem before it grows into something even more severe. By default, ToxMod processes all data on each player’s device in real-time, preserving player privacy, and only sends audio off-device to a moderation team when toxicity has been detected.
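The on-device, escalate-only-when-flagged pattern described above can be sketched as follows. This is a minimal illustration of the privacy-preserving gating idea, not Modulate's actual implementation: the scoring function, threshold, and names are all hypothetical stand-ins, since ToxMod's internals and API are not public.

```python
# Hypothetical sketch: audio is scored on the player's device and only
# leaves the device when the local model flags it as likely toxic.
# All names and the toy scoring heuristic below are illustrative.

TOXICITY_THRESHOLD = 0.8  # assumed confidence cutoff

def score_toxicity(audio_chunk: bytes) -> float:
    """Stand-in for an on-device ML model that would score speech
    using content plus vocal cues (emotion, volume, prosody)."""
    # Toy heuristic for demonstration only.
    return min(1.0, audio_chunk.count(b"!") / 10)

def moderate_chunk(audio_chunk, send_to_moderators):
    """Process one audio chunk locally; escalate off-device only if flagged.

    Returns True when the chunk was escalated to human moderators.
    """
    score = score_toxicity(audio_chunk)
    if score >= TOXICITY_THRESHOLD:
        # Only flagged audio ever leaves the device.
        send_to_moderators(audio_chunk, score)
        return True
    # Unflagged audio is discarded locally, preserving privacy.
    return False

escalated = []
moderate_chunk(b"nice shot!", lambda a, s: escalated.append((a, s)))
moderate_chunk(b"!!!!!!!!!!", lambda a, s: escalated.append((a, s)))
```

In this sketch, only the second chunk crosses the threshold and reaches the (simulated) moderation queue; the first is never sent anywhere, mirroring the default behavior the paragraph describes.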

“Since ToxMod can differentiate something like honest frustration expressed in a toxic way from malicious intent, it can also advise matchmaking or reputation algorithms to improve the player experience,” said Carter Huffman, Co-Founder and CTO of Modulate. “Additionally, each game’s private ToxMod instances learn over time about their community’s specifics, on top of ToxMod’s universal core algorithms which evolve and improve automatically behind the scenes.”

For more information on ToxMod and the Modulate Platform, visit Modulate’s website.