from Hacker News

Ask HN: Filtering AI slop in the browser before it hits our eyeballs?

by merksittich on 4/29/25, 3:26 PM with 3 comments

Given the recent explosion of easily generated synthetic images, particularly low-effort AI visuals on social platforms (e.g., LinkedIn), I'm curious why no mainstream browser-level mechanism exists to filter such content before it meets our eyes.

There are niche browser add-ons that visually classify images to detect AI generation. However, none appear to have gained significant traction, and none appear to be built for the specific purpose of hiding visual AI slop. Search providers like Kagi address AI content filtering specifically in image search, which is helpful but not useful for general browsing.

With standardized provenance metadata already available (e.g., via C2PA), why hasn't browser-level filtering of AI-generated images become a mainstream feature? The use case seems obvious: improve the signal-to-noise ratio by hiding or replacing AI-generated images until explicitly requested. Even though such filtering could be gamed as long as there are no hard-to-strip pixel-level watermarks in the images, I presume that most content "creators" would not bother. What am I missing?
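As a rough illustration of how cheap the metadata check could be: C2PA embeds its manifest as a JUMBF box, which in JPEG files is carried in APP11 (0xFFEB) marker segments. The sketch below is a hypothetical heuristic, not a real browser feature or a full C2PA validator; it only detects the presence of a JUMBF container (the `jumb` box type) in a JPEG's marker segments, and the function name `has_c2pa_manifest` is my own.

```python
def has_c2pa_manifest(jpeg: bytes) -> bool:
    """Heuristic: does this JPEG carry a C2PA/JUMBF manifest?

    Walks the JPEG marker segments before the entropy-coded data.
    C2PA manifests are embedded in APP11 (0xFFEB) segments as JUMBF
    boxes, so we look for the 'jumb' box type there. This checks
    presence only; it does not verify the manifest's signature.
    """
    if not jpeg.startswith(b"\xff\xd8"):  # must begin with SOI marker
        return False
    i = 2
    while i + 4 <= len(jpeg):
        if jpeg[i] != 0xFF:               # lost sync with marker stream
            break
        marker = jpeg[i + 1]
        if marker == 0xDA:                # SOS: image data follows, stop
            break
        length = int.from_bytes(jpeg[i + 2:i + 4], "big")
        if marker == 0xEB and b"jumb" in jpeg[i + 4:i + 2 + length]:
            return True                   # APP11 segment with JUMBF box
        i += 2 + length                   # skip to the next segment
    return False
```

A filtering extension could run a check like this on fetched images and hide or blur those whose manifest declares AI generation, though a real implementation would need to parse and cryptographically verify the manifest (e.g., via the C2PA reference libraries) rather than just detect its container.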

  • by JohnFen on 4/29/25, 5:02 PM

    > There are niche browser add-ons that visually classify images to detect AI-generated images.

    Do any of these work well enough to be useful? As in, do they have both a high rate of detection and a low false-positive rate?

  • by carlosjobim on 5/2/25, 9:05 PM

    How do I make bad people treat me better? You don't and you can't. What you should do instead is give your time and your energy to good people.

    The same with the digital world. You are looking in the wrong places if you're being exposed to AI slop.

  • by smidgeon on 4/29/25, 3:57 PM

    I'd install 'Stop the slop' like a shot