Twitch appears to be toying with a program that automatically rates streamers based on a handful of factors — including age, suspension history and partnership status — in order to pair them with advertisers. It's called the Brand Safety Score, and it was discovered in Twitch's internal API by cybersecurity student Daylam Tayari, who posted images of the changelog on Twitter.
A Twitch spokesperson stopped short of confirming the existence of the Brand Safety Score to Engadget, but offered the following statement:
"We are exploring ways to improve the experience on Twitch for viewers and creators, including efforts to better match the appropriate ads to the right communities. User privacy is critical on Twitch, and, as we refine this process, we will not pursue plans that compromise that priority. Nothing has launched yet, no personal information was shared, and we will keep our community informed of any updates along the way."
Twitch has added an automatic Brand Safety Score which grades how brand friendly every streamer is based on things like chat behavior, ban history, manual ratings by Twitch staff, games played, age, automod and more (See below). — Daylam 'tayari' Tayari (@tayariCS) March 9, 2021
According to Tayari, the Brand Safety Score rates streamers based on their age (whether they're over 18 or 21), suspension history, relationship with Twitch, partnership status, whether they use automod and at what level, whether a stream is set to mature, and the ESRB ratings of their games. There's also a section to add a manual rating from a Twitch employee.
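To make the reported factors concrete, here is a purely hypothetical sketch of how such a score might be assembled. The field names, weights, and clamping logic are all invented for illustration — Tayari's screenshots reveal which inputs exist, not how Twitch actually combines them.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class StreamerProfile:
    # Inputs reportedly tracked, per Tayari's screenshots; structure is assumed
    is_over_18: bool
    is_over_21: bool
    suspension_count: int
    is_partner: bool
    automod_level: int            # 0 (off) through 4 (strictest), hypothetical scale
    mature_flag: bool             # stream marked as mature content
    esrb_rating: str              # ESRB rating of the game played, e.g. "E", "T", "M"
    manual_rating: Optional[int] = None  # optional override by a Twitch employee

def brand_safety_score(p: StreamerProfile) -> float:
    """Toy scoring function on a 0-100 scale; every weight here is made up."""
    if p.manual_rating is not None:
        # A staff-entered rating trumps the automatic calculation (assumption)
        return float(p.manual_rating)
    score = 50.0
    score += 10 if p.is_over_18 else -10
    score += 5 if p.is_over_21 else 0
    score -= 15 * p.suspension_count
    score += 10 if p.is_partner else 0
    score += 2 * p.automod_level
    score -= 10 if p.mature_flag else 0
    score -= 15 if p.esrb_rating == "M" else 0
    return max(0.0, min(100.0, score))
```

A partnered, never-suspended streamer playing an E-rated game with strict automod would land near the top of this toy scale, while repeated suspensions and mature content would push a score toward zero.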
As described, the Brand Safety Score sounds similar to ad-rating systems already employed by sites like YouTube and Twitter, or even ratings on ridesharing apps. It should help advertisers sort through the sea of streamers, and could affect Twitch's Bounty Board, where advertisers offer specific gigs to a handful of chosen partners and affiliates.
Knowing which metrics Twitch is tracking can help streamers stay at the top of the pile, though there's no guarantee that the company will make any of its rating algorithms public — unless a curious researcher takes another dive.