by guax on 3/8/25, 1:26 PM with 46 comments
by elicksaur on 3/8/25, 1:52 PM
by Xcelerate on 3/8/25, 2:22 PM
Oh man, oh man, this just gave me a great startup idea. What is the total addressable market of nutcases willing to pay $50 or more to have someone check whether they have cracked some difficult code? Perhaps even with a giant prize attached for cracking it?
by jampekka on 3/8/25, 1:49 PM
by haswell on 3/8/25, 2:40 PM
The confident incorrectness they breed is a problem.
The impact in online social spaces is already increasingly obvious.
by chiph on 3/8/25, 2:25 PM
It has a light inside, and if you visit at night, the reverse-cut lettering will be readable on the walkway around it.
by trescenzi on 3/8/25, 1:55 PM
On one hand this is amazing because it increases access to good enough writing and information. On the other hand it raises the chances that people, regardless of educational attainment, are convinced that falsehoods are true.
1: https://arstechnica.com/ai/2025/03/researchers-surprised-to-...
by snowwrestler on 3/8/25, 2:54 PM
I feel like that would be a strong artistic statement about the CIA and intelligence agencies in general. Do people reluctantly work to know every secret because it’s actually necessary for security? Or do some people just want to know every secret, and “security” is the handiest excuse for them to pursue that?
by bell-cot on 3/8/25, 2:06 PM
Especially when the downsides of being wrong are nil.
by IshKebab on 3/8/25, 2:02 PM
...
> Some years ago, Sanborn began charging $50 to review solutions, providing a speed bump to filter out wild guesses and nut cases.
Yeah I suspect he isn't that ticked off. I'm happy to take over reviewing solutions if he likes!
by spacecadet on 3/8/25, 1:51 PM
by picafrost on 3/8/25, 2:27 PM
The valuation-perception-driven hyperbole around these Dunning-Kruger machines does not help the average person trying to bat above their level.
by kewho on 3/8/25, 3:00 PM
I liked the snark from Gizmodo's report:
> The smugness is, frankly, inexplicable. Even if they did successfully crack Sanborn’s code using AI (which, for the record, Sanborn says they haven’t even gotten close), what is it about asking a machine to do the work for you that generates such self-satisfaction?
Hear, hear.
https://gizmodo.com/chatbots-have-convinced-idiots-that-they...