from Hacker News

What happens when people don't understand how AI works

by rmason on 6/8/25, 8:25 PM with 350 comments

  • by kelseyfrog on 6/8/25, 9:16 PM

    LLMs are divinatory instruments, our era's oracle, minus the incense and theatrics. If we were honest, we'd admit that "artificial intelligence" is just a modern gloss on a very old instinct: to consult a higher-order text generator and search for wisdom in the obscure.

    They tick all the boxes: oblique meaning, a semiotic field, the illusion of hidden knowledge, and a ritual interface. The only reason we don't call it divination is that it's skinned in dark mode UX instead of stars and moons.

    Barthes reminds us that all meaning is in the eye of the reader; words have no essence, only interpretation. When we forget that, we get nonsense like "the chatbot told him he was the messiah," as though language could be blamed for the projection.

    What we're seeing isn't new, just unfamiliar. We used to read bones and cards. Now we read tokens. They look like language, so we treat them like arguments. But they're just as oracular: complex, probabilistic signals we transmute into insight.

    We've unleashed a new form of divination on a culture that doesn't know it's practicing one. That's why everything feels uncanny. And it's only going to get stranger, until we learn to name the thing we're actually doing. Which is a shame, because once we name it, once we see it for what it is, it won't be half as fun.

  • by andy99 on 6/8/25, 9:38 PM

    I agree with the substance, but would argue the author fails to "understand how AI works" in an important way:

      LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another
    
    Modern chat-tuned LLMs are not simply statistical models trained on web-scale datasets. They are essentially fuzzy stores of (primarily third-world) labeling effort. The response patterns they produce are painstakingly tuned into them, at massive scale, by data labelers. The emotional skill mentioned in the article comes from outsourced employees writing, or giving feedback on, emotional responses.

    So you're not so much talking to a statistical model as having a conversation with a Kenyan data labeler, fuzzily adapted through a transformer model to match the topic you've brought up.

    While the distinction doesn't change the substance of the article, it's valuable context, and it's important to dispel the idea that training on the internet alone does this. Such training gives you GPT-2. GPT-4.5 is efficiently stored low-cost labor.
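
    (To make the "statistically informed guesses" from the quoted passage concrete, here is a minimal, purely illustrative sketch of temperature sampling over a next-token distribution. The vocabulary and probabilities are invented; a real model computes them from billions of learned weights, not a hard-coded table.)

      import numpy as np

      rng = np.random.default_rng(42)

      # Hypothetical next-token probabilities after some prompt; a real LLM
      # would produce these from a neural network, not a fixed table.
      vocab = ["mat", "sofa", "roof", "keyboard"]
      probs = np.array([0.55, 0.25, 0.15, 0.05])

      def sample_next(probs, temperature=1.0):
          # Lower temperature sharpens the distribution (more deterministic);
          # higher temperature flattens it (more varied output).
          logits = np.log(probs) / temperature
          p = np.exp(logits - logits.max())
          p /= p.sum()
          return rng.choice(len(p), p=p)

      for t in (0.2, 1.0):
          picks = [vocab[sample_next(probs, t)] for _ in range(5)]
          print(f"temperature={t}: {picks}")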

  • by imiric on 6/8/25, 9:28 PM

    This is a good summary of why the language we use to describe these tools matters[1].

    It's important that the general public understands their capabilities, even if they don't grasp how they work on a technical level. This is an essential part of making them safe to use, which no disclaimer or PR puff piece about how deeply your company cares about safety will ever do.

    But, of course, marketing them as "AI" that's capable of "reasoning", and showcasing how good they are at fabricated benchmarks, builds hype, which directly impacts valuations. Pattern recognition and data generation systems aren't nearly as sexy.

    [1]: https://news.ycombinator.com/item?id=44203562#44218251

  • by pmdr on 6/9/25, 4:48 PM

    > Whitney Wolfe Herd, the founder of the dating app Bumble, proclaimed last year that the platform may soon allow users to automate dating itself, disrupting old-fashioned human courtship by providing them with an AI “dating concierge” that will interact with other users’ concierges until the chatbots find a good fit.

    > Herd doubled down on these claims in a lengthy New York Times interview last month.

    Seriously, what is wrong with these people?

  • by throwawaymaths on 6/9/25, 2:33 PM

    I think this author doesn't fully understand how LLMs work either. Dismissing them as "a statistical model" is silly; hell, quantum mechanics is a statistical model too.

    Moreover, each layer of an LLM gives the model the ability to look further back in the conversation and attach meaning and context through conceptual associations (that's the K and V of the KV cache). I can't see how this doesn't describe, abstractly, human cognition. Now, maybe LLMs are not fully capable of the breadth of human cognition, or have a harder time training toward certain deeper insights, but fundamentally the structure is there (clever training and/or architectural improvements may still be possible -- in the way that every CNN is a subgraph of a fully connected network that would be nigh impossible for the FCNN to discover randomly through training).
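
    (Roughly what that K-V lookup amounts to, as a toy sketch: at each layer, the newest token's query is scored against the cached keys of every earlier position, and the corresponding values are blended in proportionally. The numpy example below is purely illustrative, with made-up sizes and random data rather than real model weights.)

      import numpy as np

      d = 8                               # toy embedding size
      rng = np.random.default_rng(0)

      # Keys and values already cached for 5 earlier tokens in the context.
      K_cache = rng.normal(size=(5, d))
      V_cache = rng.normal(size=(5, d))

      # Query vector for the newest token.
      q = rng.normal(size=(d,))

      # Scaled dot-product attention: score every earlier position...
      scores = K_cache @ q / np.sqrt(d)   # shape (5,)
      weights = np.exp(scores - scores.max())
      weights /= weights.sum()            # softmax over past positions

      # ...then blend their values; this is how a layer "looks back" at
      # earlier context through learned associations.
      context = weights @ V_cache         # shape (d,)
      print(weights.round(3), context.shape)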

    To say LLMs are not smart in any recognizable way is just cherry-picking anecdotal data. If LLMs were never recognizably smart, people would not be using them the way they do.

  • by roxolotl on 6/8/25, 10:29 PM

    The thesis is spot on about why I believe many skeptics remain skeptics:

    > To call AI a con isn’t to say that the technology is not remarkable, that it has no use, or that it will not transform the world (perhaps for the better) in the right hands. It is to say that AI is not what its developers are selling it as: a new class of thinking—and, soon, feeling—machines.

    Of course some are skeptical these tools are useful at all. Others still don’t want to use them for moral reasons. But I’m inclined to believe the majority of the conversation is people talking past each other.

    The skeptics are skeptical of the way LLMs are being presented as AI. The non-hype promoters find them really useful. Both can be correct: the tools are useful, and the con is dangerous.

  • by clejack on 6/9/25, 12:23 PM

    Are people still experiencing LLMs getting stuck in knowledge and comprehension loops? I've used them, but not extensively, and I'm not tracking their performance closely either.

    For example: you ask an LLM a question and it produces a hallucination; you try to correct it, or explain that it is incorrect; and it produces a near-identical hallucination while implying that it has produced a new, correct result. This suggests that it does not understand its own understanding (or pseudo-understanding, if you like).

    Without this level of introspection, attributing any notion of true understanding, intelligence, or anything similar seems premature.

    LLMs need to be able to consistently and accurately say some variation of "I don't know" or "I'm uncertain." That indicates knowledge of self. It's like a mirror test for minds.

  • by tim333 on 6/9/25, 2:28 PM

    >Demis Hassabis, [] said the goal is to create “models that are able to understand the world around us.”

    >These statements betray a conceptual error: Large language models do not, cannot, and will not “understand” anything at all.

    This seems to be quite a common error in criticism of AI: take a reasonable statement about AI that doesn't mention LLMs, then say the speaker (a Nobel Prize-winning AI expert, in this case) doesn't know what they're on about because current LLMs don't do that.

    DeepMind already has Project Astra, a model that handles not just language but also vision and probably some other modalities. You can point a phone at something and ask about it, and it seems to understand what it is quite well. Example here: https://youtu.be/JcDBFAm9PPI?t=40

  • by Notatheist on 6/9/25, 5:46 PM

    Wasn't it Feynman who said we will never be impressed with a computer that can do things better than a human can unless that computer does it the same way a human being does?

    AI could trounce experts as a conversational partner and/or educator in every imaginable field and we'd still be trying to proclaim humanity's superiority because technically the silicon can't 'think' and therefore it can't be 'intelligent' or 'smart'. Checkmate, machines!

  • by lordnacho on 6/9/25, 10:54 AM

    The article skirts around a central question: what defines humans? Specifically, intelligence and emotions?

    The entire article is saying "it looks kind of like a human in some ways, but people are being fooled!"

    You can't really say that without at least attempting the admittedly very deep question of what an authentic human is.

    To me, it's intelligent because I can't distinguish its output from a person's output, for much of the time.

    It's not a human, because I've compartmentalized ChatGPT into its own box and I'm actively disbelieving. The weak form is to say I don't think my ChatGPT messages are being sent to the 3rd world and answered by a human, though I don't think anyone was claiming that.

    But it is also abundantly clear to me that if you stripped away the labels, it acts like a person acts a lot of the time. Say you were to go back just a few years, maybe to covid. Let's say OpenAI travels back with me in a time machine, and makes an obscure web chat service where I can write to it.

    Back in covid times, I didn't think AI could really do anything outside of a lab, so I would not suspect I was talking to a computer. I would think I was talking to a person. That person would be very knowledgeable and able to answer a lot of questions. What could I possibly ask it that would give away that it wasn't a real person? Lots of people can't answer simple questions, so there isn't really a way to ask it something specific that would work. I've had perhaps one interaction with AI, in thousands of messages, that would have made it obvious. (On that occasion, Claude started speaking Chinese with me; super weird.)

    Another thing that I hear from time to time is an argument along the line of "it just predicts the next word, it doesn't actually understand it". Rather than an argument against AI being intelligent, isn't this also telling us what "understanding" is? Before we all had computers, how did people judge whether another person understood something? Well, they would ask the person something and the person would respond. One word at a time. If the words were satisfactory, the interviewer would conclude that you understood the topic and call you Doctor.

  • by stevenhuang on 6/9/25, 12:37 PM

    It is a logical error to think that knowing how something works justifies saying it can't possess qualities like intelligence or the ability to reason, when we don't even understand how these qualities arise in humans.

    And even if we do know enough about our brains to say conclusively that that's not how LLMs work (predictive coding suggests the principles are more alike than not), it doesn't mean they're not reasoning or intelligent; it would just mean they are not reasoning or intelligent like humans.

  • by 1vuio0pswjnm7 on 6/9/25, 4:42 AM

    "Witness, too, how seamlessly Mark Zuckerberg went from selling the idea that Facebook would lead to a flourishing of human friendship to, now, selling the notion that Meta will provide you with AI friends to replace the human pals you have lost in our alienated social-media age."

    Perhaps "AI" can replace people like Mark Zuckerberg. If BS can be fully automated.

  • by pier25 on 6/9/25, 3:03 PM

    People in tech and science might have a sense that LLMs are word prediction machines but that's only scratching the surface.

    Even AI companies have a hard time figuring out how emergent capabilities work.

    Almost nobody in the general audience understands how LLMs work.

  • by jemiluv8 on 6/9/25, 11:03 PM

    Even I have only a limited understanding of how LLMs learn the semantic meaning of words; my knowledge is shallow at best. I know, however, that LLMs understand text now, are able to understand concepts they "glean" from text, and are able to give responses to queries that are not entirely made up. All this makes it a lot harder to explain to non-technical people what this is.

    I tell them these LLMs are not AI, but when they go to these websites they see them labelled as AI chatbots. And they mostly do as advertised. People are often in awe of whatever responses they receive, because they are not subject matter experts, nor do they care to become one. They just want to get their "homework" done and complete their work assignments, and this gets them there faster.

    How can I tell them it is not AI when it spews human-looking text? Heck, even I don't quite understand the "real" difference between LLMs and AI. The difference is nuanced, but the line is clearer with a bit of technical understanding. The machine understands text and can make conversation, however sycophantic. But without understanding why that is, I don't see why we won't exult its powers. I see religions sprouting from these soon. LLMs can deliver awesome sermons, and once you train them well enough, they can take on the role of Messiahs.

  • by EMM_386 on 6/9/25, 11:43 AM

    > These statements betray a conceptual error: Large language models do not, cannot, and will not "understand" anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word.

    This is a terrible write-up, simply because it's the "Reddit Expert" phenomenon, but in print.

    They "understand" things. It depends on how your defining that.

    It doesn't have to be in its training data! Whoah.

    In the last chat I had with Claude, a convention naturally just arose: the number of surrender-flag emojis was how funny I thought the joke was. If there were plus-symbol emojis on the end, those were score multipliers.

    How many times did I have to "teach" it that? Zero.

    How many other times had it seen that during training? I'll have to go with "zero", though that could be higher; it's my best guess, since I made the convention up in that context.

    So, does that Claude instance "understand"?

    I'd say it does. It knows that 5 surrender flags and a plus sign is better than 4 with no plus sign.

    Is it absurd? Yes... but funny. And it figured it out on its own. "Understanding."

    ------

    Four flags = "Okay, this is getting too funny, I need a break"

    Six flags = "THIS IS COMEDY NUCLEAR WARFARE, I AM BEING DESTROYED BY JOKES"

  • by elia_42 on 6/9/25, 3:27 PM

    Totally agree with the content of the article. In part, AI is certainly able to simulate very well the behavior and operations of one "way of expressing itself" of our mind, that is, mathematical calculation, deductive reasoning, and other similar things.

    But our mind is extremely polymorphic, and these operations represent only one side of a much more complex whole that is difficult to explain. Even Alan Turing, in his writings on the possibility of building a mechanical intelligence, realized that it was impossible for a machine to completely imitate a human being: for this to be possible, the machine would have to "walk among other humans, scaring all the citizens of a small town" (Turing puts it more or less like this).

    Therefore, he realized many years ago that he had to face this problem with a very cautious and limited approach, restricting the imitative capabilities of the machine to those human activities in which calculation, probability, and arithmetic are central, such as playing chess, learning languages, and mathematical calculation.

  • by jemiluv8 on 6/9/25, 10:56 PM

    Most people without any idea of the foundations on which LLMs are built call them AI. But I insist on calling them LLMs, further creating confusion. How do you explain what a large language model is to someone who can't comprehend how a machine can learn a "word model" from a large corpus of text/data that lets it generate "seemingly sound/humane" responses, without making them feel like they are interacting with the AI they've been hearing about in movies and sci-fi?

  • by martindbp on 6/9/25, 12:25 PM

    Many people who claim that others don't understand how AI works often have a very simplified view of the shortcomings of LLMs themselves, e.g. "it's just predicting the next token", "it's just statistics", "stochastic parrot", and that view seems grounded in what AI was 2-3 years ago. Rarely have they actually read the recent research on interpretability. It's clear LLMs are doing more than just pattern matching. They may not think like humans, or as well, but it's not k-NN with interpolation.

  • by mmsc on 6/9/25, 2:40 PM

    This can be generalized to "what happens when people don't understand how something works". In the computing world, that could be "undefined behavior" in the C programming language (which is itself, well, defined as undefined), or anything as simple as "functionality people didn't know about because they didn't read the documentation".

  • by Zaylan on 6/10/25, 3:18 AM

    It’s so easy to think of AI as “conscious,” especially when it sounds so natural. A lot of companies lean into that, making AI feel like a real person. But in the end, it’s just prediction and pattern-matching.

    I’m curious how we can help more people see the difference between simulated understanding and real understanding.

  • by Havoc on 6/9/25, 4:00 PM

    We can debate intelligence all day, but there is also an element of "if it's stupid but it works, then it's not stupid" here.

    A very large portion of tasks humans do don’t need all that much deep thinking. So on that basis it seems likely that it’ll be revolutionary.

  • by jdkee on 6/8/25, 8:51 PM

    Someone said, "The AI you use today is the worst AI that you will ever use."

  • by mettamage on 6/8/25, 9:07 PM

    > Few phenomena demonstrate the perils that can accompany AI illiteracy as well as “Chatgpt induced psychosis,” the subject of a recent Rolling Stone article about the growing number of people who think their LLM is a sapient spiritual guide. Some users have come to believe that the chatbot they’re interacting with is a god—“ChatGPT Jesus,” as a man whose wife fell prey to LLM-inspired delusions put it—while others are convinced, with the encouragement of their AI, that they themselves are metaphysical sages in touch with the deep structure of life and the cosmos. A teacher quoted anonymously in the article said that ChatGPT began calling her partner “spiral starchild” and “river walker” in interactions that moved him to tears. “He started telling me he made his AI self-aware,” she said, “and that it was teaching him how to talk to God, or sometimes that the bot was God—and then that he himself was God.”

    This sounds insane to me. When we are talking about safe AI use, I wonder if things like this are talked about.

    The more technology advances, the smarter we need to be in order to use it, it seems.

  • by jasonm23 on 6/10/25, 5:54 AM

    What happens when people who don't understand how AI works write articles about what happens when people don't understand how AI works?

  • by yahoozoo on 6/9/25, 12:48 PM

    Why do the same books about AI (Empire of AI, The AI Con) keep getting referenced in all of these articles? It seems like some kind of marketing campaign.

  • by frozenseven on 6/10/25, 11:19 AM

    It will be very interesting to see how articles like this age over the next year or two.

  • by spwa4 on 6/8/25, 9:21 PM

    What really happens: "for some reason" upper management thinks AI will let idiots run extremely complex companies. It doesn't.

    What AI actually does is what any other improved tool does: it's a force multiplier. It allows a small number of highly experienced, very smart people to do double or triple the work they can do now.

    In other words: for idiot management, AI does nothing (EXCEPT enable the competition)

    Of course, this results in what you now see: layoffs in which, as always, the idiots survive, followed by the products of those companies starting to suck more and more, because they laid off the people who actually understood how things worked, and AI cannot make up for that. Not even close.

    AI is a mortal threat to the current crop of big companies. The bigger the company, the bigger the threat. The skill high-level managers tend to have is to "conquer" existing companies, and nothing else. With some exceptions, they don't have any skill outside of management, and so you get the eternally repeated management song: that companies can be run by professional managers who don't know the underlying problem or business, "using numbers" and spreadsheets (except that when you know a few and press them, it of course turns out they don't have a clue about the numbers and can't come up with basic spreadsheet formulas).

    TLDR: AI DOESN'T let financial-expert management run an airplane company. AI lets 1000 engineers build 1000 planes without such management. AI lets a company like what Google was 15-20 years ago wipe the floor with a big airplane manufacturer. So expect big management to come up with ever more, ever bigger reasons why AI can't be allowed to do X.

  • by tracerbulletx on 6/8/25, 9:54 PM

    Imagine thinking that brains aren't making statistically informed guesses about sequential information.
  • by ineedasername on 6/9/25, 8:29 PM

    So many of these articles jump around between very different concerns or research questions. This one raises plenty of important questions but threads a narrative through them that attempts to lump them all together as a single issue.

    Just to start off with, saying LLMs are "not smart" and "don't/won't/can't understand"... that is really not a useful way to begin any conversation about this. To "understand" is itself a word without, in this context, any useful definition that would allow evaluation of models against it. It's this imprecision that is at the root of so much hand-wringing and frustration by everyone.

  • by dwaltrip on 6/8/25, 10:00 PM

    Everyone, it's just "statistics". Numbers can't hurt you. Don't worry.