from Hacker News

AI Thinks It Cracked Kryptos. The Artist Behind It Says No Chance

by guax on 3/8/25, 1:26 PM with 46 comments

  • by elicksaur on 3/8/25, 1:52 PM

    When I read stories about people 10x’ing their productivity with AI, I interpret them as having the same vibe as these “solvers”.

  • by Xcelerate on 3/8/25, 2:22 PM

    > Some years ago, Sanborn began charging $50 to review solutions, providing a speed bump to filter out wild guesses and nut cases

    Oh man, oh man, this just gave me a great startup idea. What is the total addressable market of nutcases willing to pay $50 or more to have someone check whether they have cracked some difficult code? Perhaps even with a giant prize attached for cracking it?

  • by haswell on 3/8/25, 2:40 PM

    This story summarizes one of my biggest concerns/frustrations with the rise of these LLM tools.

    The confident incorrectness they breed is a problem.

    The impact in online social spaces is already increasingly obvious.

  • by chiph on 3/8/25, 2:25 PM

    If you'd like to see a similar work by Jim Sanborn, the Cyrillic Projector is at the University of North Carolina at Charlotte, behind the Fretwell Building. Way easier to see in person than CIA headquarters!

    It has a light inside, and if you visit at night, the reverse-cut lettering will be readable on the walkway around it.

    https://maps.app.goo.gl/tFbx6kBkMgSMDGGY8

  • by trescenzi on 3/8/25, 1:55 PM

    This is one of the bigger risks of chatbot-style generative AI. People are very open to well-argued answers to problems, even if the answer is wrong. I was reading this article about how those with less educational attainment seem to use chatbots more[1].

    On the one hand, this is amazing because it increases access to good-enough writing and information. On the other hand, it raises the chances that people, regardless of educational attainment, are convinced that falsehoods are true.

    1: https://arstechnica.com/ai/2025/03/researchers-surprised-to-...

  • by snowwrestler on 3/8/25, 2:54 PM

    I love entertaining the possibility that the 4th panel of Kryptos is not actually solvable, and that everyone is spending tons of time and energy trying to uncover secret information that has no practical use and doesn’t even exist.

    I feel like that would be a strong artistic statement about the CIA and intelligence agencies in general. Do people reluctantly work to know every secret because it’s actually necessary for security? Or do some people just want to know every secret, and “security” is the handiest excuse for them to pursue that?

  • by bell-cot on 3/8/25, 2:06 PM

    But it feels SO good to believe that you (with your AI sidekick) are some sort of uber-awesome genius!

    Especially when the downsides of being wrong are nil.

  • by IshKebab on 3/8/25, 2:02 PM

    > And Sanborn is getting ticked off

    ...

    > Some years ago, Sanborn began charging $50 to review solutions, providing a speed bump to filter out wild guesses and nut cases.

    Yeah, I suspect he isn't that ticked off. I'm happy to take over reviewing solutions if he likes!

  • by spacecadet on 3/8/25, 1:51 PM

    As an artist, I would just set up a small/local AI to auto-respond to the bullshit emails and sit back and collect the $50. That would be art in itself.

  • by picafrost on 3/8/25, 2:27 PM

    I am reminded of a brief quip in Neal Stephenson's The Diamond Age where a character comments on remembering a time before "AI" had been correctly rebranded to "PI": pseudo-intelligence.

    The valuation-perception-driven hyperbole around these Dunning-Kruger machines does not help the average person trying to bat above their level.

  • by kewho on 3/8/25, 3:00 PM

    > “What took 35 years and even the NSA with all their resources could not do, I was able to do in only 3 hours before I even had my morning coffee,” it began, before the writer showed Sanborn what they believed to be the cosmically elusive solution. “History’s rewritten,” wrote the submitter. “no errors 100% cracked.”

    I liked the snark from Gizmodo's report:

    > The smugness is, frankly, inexplicable. Even if they did successfully crack Sanborn’s code using AI (which, for the record, Sanborn says they haven’t even gotten close), what is it about asking a machine to do the work for you that generates such self-satisfaction?

    Hear, hear.

    https://gizmodo.com/chatbots-have-convinced-idiots-that-they...