

What exactly does Mozilla mean by a YouTube “regret”? It says the term is a crowdsourced concept based on users self-reporting bad experiences on YouTube, so it’s a subjective measure. The reports were generated between July 2020 and May 2021. The crowdsourced study - which Mozilla bills as the largest ever into YouTube’s recommender algorithm - drew on data from more than 37,000 YouTube users who installed the extension, although it was a subset of 1,162 volunteers, from 91 countries, who submitted the reports flagging the 3,362 regrettable videos that the report draws on directly.


The crowdsourced volunteers whose data fed Mozilla’s research reported a wide variety of “regrets,” including videos spreading COVID-19 fear-mongering, political misinformation and “wildly inappropriate” children’s cartoons, per the report - with the most frequently reported content categories being misinformation, violent/graphic content, hate speech and spam/scams. A substantial majority (71%) of the regret reports came from videos that had been recommended by YouTube’s algorithm itself, underscoring the AI’s starring role in pushing junk into people’s eyeballs.

The research also found that recommended videos were 40% more likely to be reported by the volunteers than videos they’d searched for themselves. Mozilla even found “several” instances when the recommender algorithm put content in front of users that violated YouTube’s own community guidelines and/or was unrelated to the previous video watched.

A very notable finding was that regrettable content appears to be a greater problem for YouTube users in non-English speaking countries: Mozilla found YouTube regrets were 60% higher in countries without English as a primary language - with Brazil, Germany and France generating what the report said were “particularly high” levels of regretful YouTubing. (And none of the three can be classed as minor international markets.) So a clear fail.

Pandemic-related regrets were also especially prevalent in non-English speaking countries, per the report - a worrying detail to read in the middle of an ongoing global health crisis.

To gather data on specific recommendations being made to YouTube users - information that Google does not routinely make available to external researchers - Mozilla took a crowdsourced approach, via a browser extension (called RegretsReporter) that lets users self-report YouTube videos they “regret” watching. The tool can generate a report that includes details of the videos the user had been recommended, as well as earlier video views, to help build up a picture of how YouTube’s recommender system was functioning. (Or, well, “dysfunctioning” as the case may be.)
New research published today by Mozilla backs that notion up, suggesting YouTube’s AI continues to puff up piles of “bottom-feeding”/low-grade/divisive/disinforming content - stuff that tries to grab eyeballs by triggering people’s sense of outrage, sowing division/polarization or spreading baseless/harmful disinformation - which in turn implies that YouTube’s problem with recommending terrible stuff is indeed systemic; a side effect of the platform’s rapacious appetite to harvest views to serve ads.

That YouTube’s AI is still - per Mozilla’s study - behaving so badly also suggests Google has been pretty successful at fuzzing criticism with superficial claims of reform. The mainstay of its deflective success here is likely the primary protection mechanism of keeping the recommender engine’s algorithmic workings (and associated data) hidden from public view and external oversight - via the convenient shield of “commercial secrecy.” But regulation that could help crack open proprietary AI blackboxes is now on the cards - at least in Europe.
