by mef on 5/25/25, 5:07 PM with 359 comments
by atdt on 5/26/25, 1:19 AM
That may change, particularly if the intelligence of LLMs proves to be analogous to our own in some deep way—a point that is still very much undecided. However, if the similarities are there, so is the potential for knowledge. We have a complete mechanical understanding of LLMs and can pry apart their structure, which we cannot yet do with the brain. And some of the smartest people in the world are engaged in making LLMs smaller and more efficient; it seems possible that the push for miniaturization will rediscover some tricks also discovered by the blind watchmaker. But these things are not a given.
by papaver-somnamb on 5/26/25, 3:01 AM
LLM designs to date are purely statistical models. A pile, a morass of floating point numbers and their weighted relationships, along with the software and hardware that animate them and the user input and output that make them valuable to us. An index of the data fed into them, different from a Lucene or SQL DB index made from compsci algorithms & data structure primitives. Recognizable under Asimov's definition.
And these LLMs feature no symbolic reasoning whatsoever within their computational substrate. What they do feature is a simple recursive model: given the input so far, what is the next token? And they are enabled to do this only after training on huge amounts of input material. No inherent reasoning capabilities, no primordial ability to apply logic, or even to infer basic axioms of logic, reasoning, or thought. And therefore unrecognizable under Chomsky's definition.
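To make that one trick concrete, here is a minimal sketch of the next-token loop, with a toy bigram count table standing in for the trained network (the corpus, table, and function names are purely illustrative, not anything from a real LLM):

    # Minimal sketch of "given the input so far, what is the next token?"
    # A toy bigram count table stands in for the trained network.
    import random
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat . the dog sat on the rug .".split()

    # "Training": count which token follows which.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def next_token_distribution(context):
        # A real LLM conditions on the whole context; this toy looks only at the last token.
        return follows[context[-1]]

    def generate(prompt, steps=8):
        tokens = prompt.split()
        for _ in range(steps):
            dist = next_token_distribution(tokens)
            if not dist:
                break
            # Sample the next token in proportion to its count, append, repeat.
            choices, weights = zip(*dist.items())
            tokens.append(random.choices(choices, weights=weights)[0])
        return " ".join(tokens)

    print(generate("the cat"))

The whole loop is just "predict, append, repeat"; everything else is scale.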
So our LLMs are a mere parlor trick. A one-trick pony. But the trick they do is oh-so vastly complicated, very appealing to us, and of practical application and real value. It harks back to the question: What is the nature of intelligence? And how do we define it?
And I say this while thinking of the marked contrast in apparent intelligence between an LLM and, say, a 2-year-old child.
by zombot on 5/26/25, 6:33 AM
by whattheheckheck on 5/25/25, 5:25 PM
by calibas on 5/25/25, 6:48 PM
And there's a fact here that's very hard to dispute: this method works. I can give a computer instructions and it "understands" them in a way that wasn't possible before LLMs. The main debate now is over the semantics of words like "understanding" and whether or not an LLM is conscious in the same way as a human being (it isn't).
by Xmd5a on 5/26/25, 8:58 AM
These days, Chomsky is working on Hopf algebras (an algebraic structure also used in quantum physics) to explain language structure.
by visarga on 5/26/25, 9:28 AM
It's like wondering how well your shoes fit your feet, forgetting that shoes are made and chosen to fit your feet in the first place.
by asmeurer on 5/25/25, 6:08 PM
by lucisferre on 5/25/25, 6:26 PM
This quote brought to mind the very different technological development path of the spider species in Adrian Tchaikovsky's Children of Time. They used pheromones to 'program' a race of ants to do computation.
by teleforce on 5/26/25, 10:22 AM
Chomsky made interesting points comparing the performance of AI and of biological organisms with that of humans, but his conclusion is not correct. We already know that a cheetah runs faster than a human and an elephant is far stronger. A bat can navigate in the dark with echolocation, and dolphins can hunt in packs with high-precision coordination, to devastating effect compared to hunting alone.
Whether we like it or not, humans are at the top, contrary to Chomsky's claim otherwise. Through scientific discovery (understanding) and design (engineering) that exploit the laws of nature, humans can and have surpassed all of the cognitive capabilities of these petty animals, and we're mostly responsible for their inevitable demise and extinction. Humans now need to collectively and consciously reverse the extinction of these "superior" cognitive animals in order to preserve them, for better or worse. No other earthbound creature can do that to us.
by mrmdp on 5/26/25, 7:24 PM
Many of the comments herein lack that feature and seem to convey that the author might be full of him(her)self.
Also, some of the comments are a bit pejorative.
by bawana on 5/26/25, 12:45 PM
OTOH, consider LLMs as a roomful of monkeys that can communicate with each other and look at words, sentences, and paragraphs on posters around the room, with a human in the room who gives them a banana when they type out a new word, sentence, or paragraph.
You may eventually get a roomful of monkeys that can respond to a new sentence you give them with what seems like an intelligent reply. And since language is the creation of humans, it represents an abstraction of the world made by humans.
by ggm on 5/26/25, 2:45 AM
I happen to agree with his view, so I came armed to agree, and read this with a view in mind which I felt was reinforced. People are overstating the AGI qualities and misapplying the tool, sometimes the same people.
In particular, the lack of theory and scientific method means both that we're not learning much and that we've reified the machine.
I was disappointed nothing was said of Norbert Wiener. A man who invented cybernetics and had the courage to stand up to the military industrial complex.
by skydhash on 5/25/25, 5:37 PM
But what we're good at is using all of our capabilities to transform the world around us according to an internal model that is partially shared between individuals. And we have complete control over that internal model, diverging from reality and converging towards it on a whim.
So we can't produce and manipulate text faster, but the end game is rarely to produce and manipulate text. Mostly it's about sharing ideas and facts (aka internal models), and the control is ultimately what matters. It can help us, just like a calculator can help us solve an equation.
EDIT
After learning to draw, I have that internal model that I switch to whenever I want to sketch something. It's like a special mode of observation, where you no longer simply see, but pick up a lot of extra details according to all the drawing rules you internalized. There aren't a lot of them; they're just intrinsically connected with each other. The difficult part is hand-eye coordination and analyzing the divergences between what you see and the internal model.
I think that's why a lot of artists are disgusted with AI generators. There's no internal model. Trying to extract one from a generated picture is a futile exercise. Same with generated texts. Alterations from the common understanding follow no pattern.
by schoen on 5/25/25, 5:24 PM
by ashoeafoot on 5/26/25, 1:43 PM
"To characterize a structural analysis of state violence as “apologia” reveals more about prevailing ideological filters than about the critique itself. If one examines the historical record without selective outrage, the pattern is clear—and uncomfortable for all who prefer myths to mechanisms." the fake academic facade, the us diabolism, the unwillingness to see complexity and responsibility in other its all with us forever ..
by caycep on 6/2/25, 8:02 PM
by r00sty on 5/26/25, 3:36 PM
by oysterville on 5/25/25, 7:04 PM
by Amadiro on 5/26/25, 10:14 AM
> We can make a rough distinction between pure engineering and science. There is no sharp boundary, but it’s a useful first approximation. Pure engineering seeks to produce a product that may be of some use. Science seeks understanding. If the topic is human intelligence, or cognitive capacities of other organisms, science seeks understanding of these biological systems.
If you take this approach, of course it follows that we should laugh at Tom Jones.
But a more differentiated approach is to recognize that science also falls into (at least) two categories: the science that we do because it expands our capability into something that we were previously incapable of, and the science that does not. (We typically do a lot more of the former than the latter, for obvious practical reasons.)
Of course it is interesting from a historical perspective to understand the seafaring exploits of Polynesians, but as soon as there was a better way of navigating (e.g. by stars or by GPS) the investigation of this matter was relegated to the second type of science, more of a historical kind of investigation. Fundamentally we investigate things in science that are interesting because we believe the understanding we can gain from them can move us forward somehow.
Could it be interesting to understand how Hamilton was thinking when he came up with quaternions? Sure. Are a lot of mathematicians today concerning themselves with studying this? No, because the frontier has been moved far beyond.*
When you take this view, it's clear that his statement
> These considerations bring up a minor problem with the current LLM enthusiasm: its total absurdity, as in the hypothetical cases where we recognize it at once. But there are much more serious problems than absurdity.
is not warranted. Consider the following, in his own analogy:
> These considerations bring up a minor problem with the current GPS enthusiasm: its total absurdity, as in the hypothetical cases where we recognize it at once. But there are much more serious problems than absurdity. One is that GPS systems are designed in such a way that they cannot tell us anything about navigation, planning routes or other aspects of orientation, a matter of principle, irremediable.
* I'm making a simplifying assumption here that we can't learn anything useful for modern navigation anymore from studying Polynesians or ants; this might well be untrue, but that is also the case for learning something about language from LLMs, which according to Chomsky is apparently impossible and not even up for debate.
by titzer on 5/25/25, 5:53 PM
While there are some things in this I find myself nodding along to, I can't help but feel it's a really old take that is super vague and hand-wavy. The truth is that all of the progress on machine learning is absolutely science. We understand extremely well how to make neural networks learn efficiently; it's why the data leads anywhere at all. Backpropagation and gradient descent are extraordinarily powerful. Not to mention all the "just engineering" of making chips crunch incredible amounts of numbers.
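As a toy picture of what gradient descent does, a minimal sketch (the one-parameter quadratic loss and the learning rate are arbitrary illustrative choices, nothing specific to how LLMs are actually trained):

    # Toy gradient descent on a one-parameter loss, loss(w) = (w - 3)^2.
    # Real training backpropagates through billions of parameters, but the step is the same idea.
    def loss(w):
        return (w - 3.0) ** 2

    def grad(w):
        return 2.0 * (w - 3.0)  # analytic derivative of the loss

    w = 0.0    # initial parameter
    lr = 0.1   # learning rate
    for step in range(50):
        w -= lr * grad(w)  # step downhill along the gradient

    print(w)  # converges toward the minimum at w = 3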
Chomsky is extremely ungenerous to the progress and also pretty flippant about what this stuff can do.
I think we should probably stop listening to Chomsky; he hasn't said anything here that he hasn't already said a thousand times over the decades.
by prpl on 5/25/25, 5:55 PM
What is elegant as a model is not always what works, and working towards a clean model to explain everything from a model that works is fraught, hard work.
I don’t think anyone alive will realize true “AGI”, but it won’t matter. You don’t need it, the same way particle physics doesn’t need elegance
by LudwigNagasena on 5/25/25, 6:16 PM
by retskrad on 5/25/25, 5:52 PM
by jdkee on 5/26/25, 4:59 AM
https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chat...
by submeta on 5/25/25, 5:47 PM
by msh on 5/25/25, 5:25 PM
by godelski on 5/26/25, 2:42 AM
If you're only going to read one part, I think it is this:
| I mentioned insect navigation, which is an astonishing achievement. Insect scientists have made much progress in studying how it is achieved, though the neurophysiology, a very difficult matter, remains elusive, along with evolution of the systems. The same is true of the amazing feats of birds and sea turtles that travel thousands of miles and unerringly return to the place of origin.
| Suppose Tom Jones, a proponent of engineering AI, comes along and says: “Your work has all been refuted. The problem is solved. Commercial airline pilots achieve the same or even better results all the time.”
| If even bothering to respond, we’d laugh.
| Take the case of the seafaring exploits of Polynesians, still alive among Indigenous tribes, using stars, wind, currents to land their canoes at a designated spot hundreds of miles away. This too has been the topic of much research to find out how they do it. Tom Jones has the answer: “Stop wasting your time; naval vessels do it all the time.”
| Same response.
It is easy to look at metrics of performance and call things solved. But there's much more depth to these problems than our ability to solve some task. It's not just about the ability to do something; the how matters. It isn't important that we are able to navigate better than birds or insects. Our achievements say nothing about what they do. This would be like saying we developed a good algorithm only by looking at its ability to do some task. Certainly that is an important part, and even a core reason for why we program in the first place! But its performance tells us little to nothing about its implementation. The implementation still matters! Are we making good use of our resources? Certainly we want to be efficient, in an effort to drive down costs. Are there flaws or errors that we didn't catch in our measurements? Those things come at huge costs and fundamentally limit our programs in the first place. The task performance tells us nothing about the vulnerability to hackers nor what their exploits will cost our business.
That's what he's talking about.
Just because you can do something well doesn't mean you have a good understanding. It's natural to think the two relate because understanding improves performance, and that's primarily how we drive our education. But this is not a necessary condition, and we have a long history demonstrating that. I'm quite surprised this concept is so contentious among programmers. We've seen the follies of using test-driven development. Fundamentally, that is the same. There's more depth than what we can measure here, and we should not be quick to presume that good performance is the same as understanding[0,1]. We KNOW this isn't true[2].
I agree with Chomsky: it is laughable. It is laughable to think that the man in The Chinese Room[3] must understand Chinese. 40 years in, on a conversation hundreds of years old. Surely we know you can get a good grade on a test without actually knowing the material. Hell, there's a trivial case of just having the answer sheet.
[0] https://www.reddit.com/r/singularity/comments/1dhlvzh/geoffr...
[1] https://www.youtube.com/watch?v=Yf1o0TQzry8&t=449s
by paulsutter on 5/25/25, 5:51 PM
Quoting Chomsky:
> These considerations bring up a minor problem with the current LLM enthusiasm: its total absurdity, as in the hypothetical cases where we recognize it at once. But there are much more serious problems than absurdity.
> One is that the LLM systems are designed in such a way that they cannot tell us anything about language, learning, or other aspects of cognition, a matter of principle, irremediable... The reason is elementary: The systems work just as well with impossible languages that infants cannot acquire as with those they acquire quickly and virtually reflexively.
Response from o3:
LLMs do surface real linguistic structure:
• Hidden syntax: Attention heads in GPT-style models line up with dependency trees and phrase boundaries—even though no parser labels were ever provided. Researchers have used these heads to recover grammars for dozens of languages.
• Typology signals: In multilingual models, languages that share word-order or morphology cluster together in embedding space, letting linguists spot family relationships and outliers automatically.
• Limits shown by contrast tests: When you feed them “impossible” languages (e.g., mirror-order or random-agreement versions of English), perplexity explodes and structure heads disappear—evidence that the models do encode natural-language constraints.
• Psycholinguistic fit: The probability spikes LLMs assign to next-words predict human reading-time slow-downs (garden-paths, agreement attraction, etc.) almost as well as classic hand-built models.
These empirical hooks are already informing syntax, acquisition, and typology research—hardly “nothing to say about language.”
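The "contrast tests" bullet can be checked directly. Below is a minimal sketch of that kind of test, assuming the Hugging Face transformers library and GPT-2; reversing the word order is only a crude stand-in for the controlled "impossible language" constructions used in the actual studies.

    # Rough contrast test: compare model perplexity on a natural sentence vs. a
    # word-order-scrambled one. GPT-2 and reversal are illustrative stand-ins.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    def perplexity(text):
        ids = tok(text, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(ids, labels=ids).loss  # mean next-token cross-entropy
        return torch.exp(loss).item()

    natural = "the cat sat on the mat and looked out the window"
    scrambled = " ".join(reversed(natural.split()))

    print("natural:  ", perplexity(natural))
    print("scrambled:", perplexity(scrambled))  # typically much higher

A big perplexity gap on scrambled input is weak evidence by itself, but it is the same flavor of measurement the cited work builds on.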
by netcan on 5/25/25, 5:43 PM
This is what Chomsky always wanted ai to be... especially language ai. Clever solutions to complex problems. Simple once you know how they work. Elegant.
I sympathize. I'm a curious human. We like elegant, simple revelations that reveal how our complex world is really simple once you know its secrets. This aesthetic has also been productive.
And yet... maybe some things are complicated. Maybe LLMs do teach us something about language... that language is complicated.
So sure. You can certainly critique the "ai blogosphere" for exuberance and big speculative claims. That part is true. Otoh... linguistics is one of the areas where ai-based research may turn up some new insights.
Overall... what wins is what is most productive.
by newAccount2025 on 5/25/25, 5:55 PM
by AIorNot on 5/26/25, 1:29 AM
Understanding Linguistics before LLMs:
“We think Birds fly by flapping their wings”
Understanding Linguistics Theories after LLMs:
“Understanding the physics of aerofoils and Bernoulli’s principle means we can replicate what birds do”
by dragochat on 5/26/25, 6:26 AM
by thasso on 5/25/25, 5:51 PM
By whom?
by Orangeair on 5/25/25, 5:56 PM
by 0xDEAFBEAD on 5/25/25, 5:56 PM
"The first principle is that you must not fool yourself, and you are the easiest person to fool."
by lanfeust6 on 5/25/25, 5:56 PM
by A4ET8a8uTh0_v2 on 5/25/25, 6:40 PM
Not that I am an LLM zealot. Frankly, parts of the clear trajectory it puts humans on make me question our futures in this timeline. But even as a non-zealot, merely an amused but bored middle-class rube aware of the serious issues with it (privacy, detailed personal profiling that surpasses existing systems, energy use, and the actual power of those who wield it), I can see it being implemented everywhere with a mix of glee and annoyance.
I know for a fact it will break things and break things hard, and it will be people who know how things actually work who will need to fix them.
I will be very honest though. I think Chomsky is stuck in his internal model of the world and unable to shake it off. Even his arguments fall flat because they don't fit the domain well. It seems like they should, given that he practically made his name on syntax theory (which suggests his thoughts should translate well into it), and yet... they don't.
I have a minor pet theory on this, but I am still working on putting it into some coherent words.
by petermcneeley on 5/25/25, 5:37 PM
by mrandish on 5/25/25, 5:35 PM
by kevinventullo on 5/26/25, 1:20 AM
Who would make such a claim? LLMs are of course incredible, but it seems obvious that their mechanism is quite different from the human brain's.
I think the best you can say is that one could motivate lines of inquiry in human understanding, especially because we can essentially do brain surgery on an LLM in action in a way that we can’t with humans.
by johnfn on 5/25/25, 5:40 PM
> Again, we’d laugh. Or should.
Should we? This reminds me acutely of imaginary numbers. They are a great theory of numbers that can list many numbers that do 'exist' and many that can't possibly 'exist'. And we did laugh when imaginary numbers were first introduced - the name itself was intended as a derogatory term for the concept. But who's laughing now?
by irrational on 5/26/25, 2:40 AM
by next_xibalba on 5/25/25, 5:32 PM