Twitch Launches New Tool To Combat Chat Harassment


Twitch has released a new tool that aims to combat harassment in chat, keeping channels a little safer and giving streamers some peace of mind after the recent wave of hate raids targeting minorities.

The platform announced yesterday (30) a new system that uses machine learning to identify users who are circumventing channel bans.

In a post on its official blog, Twitch explains how the new tool works, emphasizing that users banned from a given channel are supposed to lose access to it permanently. According to the post, however, some of these individuals “choose to create new accounts, returning to the chat and continuing with their abusive behavior”.

According to the platform, the new system detects suspicious activity, helping to “identify these users based on a number of account signals”. Any account flagged by the tool now has its chat interactions limited: its messages are visible only to the streamer and their moderation team, who can then take appropriate action.
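Twitch has not disclosed which account signals its model actually uses, but the described flow can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the `Account` fields, the scoring weights, and the `route_message` threshold are hypothetical, not Twitch's real implementation.

```python
# Hypothetical sketch of a ban-evasion flagging flow. The signals,
# weights, and threshold are illustrative assumptions only; Twitch's
# actual machine-learning model and features are not public.

from dataclasses import dataclass


@dataclass
class Account:
    name: str
    age_days: int                 # very new accounts tend to look riskier
    shares_ip_with_banned: bool   # overlaps with a banned account's network
    similar_name_to_banned: bool  # username resembles a banned account's


def suspicion_score(acct: Account) -> float:
    """Combine simple account signals into a score between 0 and 1."""
    score = 0.0
    if acct.age_days < 7:
        score += 0.3
    if acct.shares_ip_with_banned:
        score += 0.4
    if acct.similar_name_to_banned:
        score += 0.3
    return score


def route_message(acct: Account, text: str, threshold: float = 0.5) -> str:
    """Decide who sees a chat message: everyone, or only streamer + mods."""
    if suspicion_score(acct) >= threshold:
        return "mods_only"  # message held for the moderation team to review
    return "public"


fresh = Account("new_viewer42", age_days=2,
                shares_ip_with_banned=True, similar_name_to_banned=False)
print(route_message(fresh, "hello"))  # mods_only
```

In this sketch a flagged account's messages are simply routed to a mods-only channel rather than blocked outright, mirroring the behavior the post describes: moderators still see the message and make the final call.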

Twitch makes it clear that, because the system is new and based on machine learning, its results will not be 100% reliable, especially during the launch period. In any case, the feature is optional and disabled by default; streamers must turn on suspicious-activity detection in their channel settings.

The initiative aims to let the community police itself, giving it a new way to block interaction from disrespectful and abusive individuals. “You are the experts when it comes to your community, so you should make the final decision on who can participate in it,” the company reaffirms in the post.