by xnx on 5/24/24, 1:54 AM
Google is now getting more user data and feedback than any other LLM tool. Because Google's AI feature uses data from the web, this effectively becomes an information accuracy bug report for all of human knowledge. Teaching an LLM "truth" is extremely difficult, but getting lots of feedback is a big advantage.
by Epskampie on 5/24/24, 7:35 AM
Interesting how Google first decided to "borrow" knowledge from other pages and display it directly on its own site, and is now also directly blamed when it's wrong.
If it were just a link, people would think that site was wrong; now they think Google is wrong.
by deadbolt on 5/24/24, 1:29 AM
"Cracked Tooth Syndrome" lol
I wonder who signed off on moving this shit into production, and just how many promotions they've already received.
by botanical on 5/24/24, 4:07 AM
There is a funny side when a corporation's AI system gets things wrong, but these AI "helpers" are detrimental to society. Token generators like LLMs that have been "humanised" need to be heavily regulated before we see loss of human life from someone following friendly "human-AI" advice. This is also the same Google that's offering its AI services to weapons systems in Israel that have already killed innocent civilians. We need regulation of corporations.
by tivert on 5/24/24, 5:14 AM
https://x.com/heshiebee/status/1793810016199197097:
> Human: how many rocks should I eat?
> AI: According to UC Berkeley geologists, eating at least one small rock per day is recommended because rocks contain minerals and vitamins that are important for digestive health.
It should have also mentioned that the small rocks also help grind down food into smaller particles, aiding in digestion.
by p3rls on 5/24/24, 11:56 AM
It's incredible how bad Google is at manual corrective action. You'd think maintaining a blacklist would be pretty simple. I've seen the same shitty neon-colored WordPress sites dominate niches for decades by buying links from the NYTimes, LATimes, etc. You'd think someone at Google would be thinking to themselves: wait, something is amiss? The money train brrrrrrs too loud to hear these things.
by hylaride on 5/24/24, 1:43 PM
In all seriousness and fairness, this is par for the course of internet-related health advice...
by supriyo-biswas on 5/24/24, 4:12 AM
I'm seeing a lot of these threads at the same time today, but what's interesting to me is how to do RAG safely, since that's what's backing these answers.
People generally don't write seriously about eating rocks, so for a query like this there's little authoritative material to retrieve, and the RAG step ends up picking up questionable sources like satire and forum jokes.
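A minimal sketch of what source filtering in such a RAG pipeline could look like. Everything here is an illustrative assumption (the `search_index` object, the `credibility_score` heuristic, the 0.6 threshold), not any real Google API:

```python
# Hypothetical sketch: filter retrieved passages by source credibility
# before handing them to the LLM for answer synthesis.
from dataclasses import dataclass

@dataclass
class Passage:
    url: str
    text: str

def credibility_score(url: str) -> float:
    """Toy stand-in for a source-quality model; a real system would use
    domain reputation, author signals, citation graphs, etc."""
    trusted = ("nih.gov", "who.int", "britannica.com")
    satire_or_forums = ("theonion.com", "reddit.com", "quora.com")
    if any(d in url for d in trusted):
        return 0.9
    if any(d in url for d in satire_or_forums):
        return 0.2
    return 0.5

def retrieve_for_prompt(query: str, search_index, k: int = 5,
                        min_score: float = 0.6) -> list[Passage]:
    """Fetch candidate passages and drop those from low-credibility sources.
    `search_index` is assumed to expose search(query, limit=...) returning
    Passage objects. For rare queries ("how many rocks should I eat?") every
    candidate may be filtered out; returning nothing is safer than letting
    the model summarize satire as fact."""
    candidates: list[Passage] = search_index.search(query, limit=k * 4)
    kept = [p for p in candidates if credibility_score(p.url) >= min_score]
    return kept[:k]
```

The design choice being argued for: when the filtered set is empty, decline to generate an AI answer rather than fall back to whatever the open web returns.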
by OutOfHere on 5/24/24, 2:48 AM
This is why it is important for an LLM to train on high-quality texts, not on random Reddit trolling. That's not to say that all of Reddit is this way; it absolutely isn't, but Google doesn't know the difference.
by flax on 5/24/24, 5:29 PM
What's the problem? I eat salt practically every day.
by h2odragon on 5/24/24, 1:10 AM
Get the FDA to tell people "You're not a bird."
by yazzku on 5/24/24, 2:00 AM
I like how it references Reddit and Quora, the ultimate repositories of human knowledge.
"Trust me on this one, I got it from Quora."
by esafak on 5/24/24, 1:54 AM
It's interesting that the model is not smart enough to understand satire.
by throwaway5959 on 5/24/24, 1:31 AM
Seems like it'd be trivial for Google to remove The Onion from their pretraining data.
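A domain-level filter over the pretraining corpus is conceptually that simple. A hedged sketch, assuming each corpus record carries a `url` field (the blocklist and record format are made up for illustration):

```python
# Illustrative only: drop documents from blocked domains before pretraining.
from urllib.parse import urlparse

BLOCKED_DOMAINS = {"theonion.com"}  # hypothetical blocklist

def keep_document(doc: dict) -> bool:
    """Assumes each corpus record has a 'url' key; real corpora vary."""
    host = urlparse(doc.get("url", "")).netloc.lower()
    return not any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

# Example usage on a toy corpus:
raw_corpus = [
    {"url": "https://www.theonion.com/geologists-recommend-eating-rocks", "text": "satire"},
    {"url": "https://example.edu/minerals", "text": "legit"},
]
filtered = [d for d in raw_corpus if keep_document(d)]  # keeps only example.edu
```

The harder part, of course, is that satire and trolling aren't confined to a known list of domains.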
by MooooonSheep on 5/24/24, 1:23 AM
Does this really make sense?