Yesterday, Google launched an “experimental” Chrome extension called Tune to hide toxic comments on sites like YouTube, Twitter, Facebook and Reddit.
Behind the curtain: Tune is part of a conversational AI research project made by Google Jigsaw — a unit of Alphabet that aims to use technology to make the world safer.
It works by letting you set the "volume" of toxic comments you see across different sites: turn it all the way down to "zen mode" to skip comments entirely, or turn it up and see allll of them.
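Tune's own source isn't shown here, but it builds on the same toxicity models behind Jigsaw's public Perspective API. As a rough illustration only, here's what a "toxicity volume" filter could look like against that API; the volume threshold and function names are hypothetical stand-ins, not Tune's actual code.

```typescript
// Conceptual sketch, not Tune's implementation. Uses Jigsaw's public
// Perspective API; the "volume" dial mapping is an assumption.
const PERSPECTIVE_URL =
  "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze";

async function toxicityScore(text: string, apiKey: string): Promise<number> {
  const res = await fetch(`${PERSPECTIVE_URL}?key=${apiKey}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      comment: { text },
      languages: ["en"],
      requestedAttributes: { TOXICITY: {} },
    }),
  });
  const data = await res.json();
  // Perspective returns a probability-like toxicity score between 0 and 1.
  return data.attributeScores.TOXICITY.summaryScore.value;
}

// "Volume" dial: 0 hides everything ("zen mode"), 1 shows everything.
async function filterComments(
  comments: string[],
  volume: number,
  apiKey: string
): Promise<string[]> {
  const kept: string[] = [];
  for (const text of comments) {
    const score = await toxicityScore(text, apiKey);
    if (score <= volume) kept.push(text); // hide anything above the dial
  }
  return kept;
}
```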
The Product Hunt community had some mixed reactions...
“On one hand we criticize filter bubbles, on the other hand, we build things like this” - Anna
“Useful focus group to see how people pick and choose what they deem as something not worth seeing. Data farm to inform grander goal.” - Christopher
“It's simply an unhealthy approach to information” - Filip
For some, a single toxic comment can make them afraid to post online, or to use certain sites (i.e., social media) at all. For others, it's easy to ignore the trolls. That's the idea behind Tune: it gives the power back to readers.
But is a censored internet what we really want?
There are already tools like Refined Twitter and Blindfold for controlling Twitter, Vanilla for checking the toxicity of your own tweets, and Sour Grapes for hiding negativity on your Facebook Ads.
If anything, what we need right now is a more *transparent* internet. It's worth noting that Google says the extension doesn't store any personal data.