by RA2lover on 6/2/24, 3:57 PM with 65 comments
by mortenjorck on 6/3/24, 2:22 AM
For a couple of years now, I've had this half-articulated sense that the uncanny ability of sufficiently advanced language models to back into convincing simulations of conscious thought entirely via predicting language tokens means something profound about the nature of language itself.
I'm sure there are much smarter people than I thinking about this (and probably quite a bit of background reading that would help; Chomsky, perhaps McLuhan?), but it feels like, in parallel to everything going on in the development of LLMs, there's also something big about us waiting there under the surface.
by szvsw on 6/2/24, 10:43 PM
by zharknado on 6/2/24, 10:48 PM
This seems plausible, and amazing or terrible depending on the application.
An amazing application would be textbooks that adapt to use examples, analogies, pacing, etc. that enhance the reader’s engagement and understanding.
An unfortunate application would be mapping which features are persuasive to individual users for hyper-targeted advertising and propaganda.
A terrible application would be tracking latent political dissent to punish people for thought-crime.
by kepano on 6/2/24, 9:08 PM
Partially it's because we're still wrapping our heads around what kind of experience this might enable. The tools still feel ahead of the medium. I think we're closer to Niépce than Muybridge.
In photography terms, we've just figured out how to capture photons on paper — and artists haven't figured out how to use that to make something interesting.
by sebmellen on 6/2/24, 11:43 PM
The full quote is more psychedelic, in the context of his experience with the so-called ‘jeweled self-dribbling basketballs’ he would encounter on DMT trips, which he said were made of a kind of language, or ‘syntax binding light’:
“You wonder what to make of it. I’ve thought about this for years and years and years, and I don’t know why there should be an invisible syntactical intelligence giving language lessons in hyperspace. That certainly, consistently seems to be what is happening.
I’ve thought a lot about language as a result of that. First of all, it is the most remarkable thing we do.
Chomsky showed the deep structure of language is under genetic control, but that’s like the assembly language level. Local expressions of language are epigenetic.
It seems to me that language is some kind of enterprise of human beings that is not finished.
We have now left the grunts and the digs of the elbow somewhat in the dust. But the most articulate, brilliantly pronounced and projected English or French or German or Chinese is still a poor carrier of our intent. A very limited bandwidth for the intense compression of data that we are trying to put across to each other. Intense compression.
It occurs to me, the ratios of the senses, the ratio between the eye and the ear, and so forth, this also is not genetically fixed. There are ear cultures and there are eye cultures. Print cultures and electronic cultures. So, it may be that our perfection and our completion lies in the perfection and completion of the word.
Again, this curious theme of the word and its effort to concretize itself. A language that you can see is far less ambiguous than a language that you hear. If I read the paragraph of Proust, then we could spend the rest of the afternoon discussing, what did he mean? But if we look at a piece of sculpture by Henry Moore, we can discuss, what did he mean, but at a certain level, there is a kind of shared bedrock that isn’t in the Proust passage. We each stop at a different level with the textual passage. With the three-dimensional object, we all sort of start from the same place and then work out our interpretations. Is it a nude, is it an animal? Is it bronze, is it wood? Is it poignant, is it comical? So forth and so on.”
This post feels like the beginning of that concretization.
by 082349872349872 on 6/2/24, 6:34 PM
Why do I suspect the offence will always be ahead of the defence in these areas?
I'd earlier suggested that everyone, in elementary school, ought to watch Ancient Aliens and attempt to note the moment where each episode jumps the shark. I take it we could attempt this with LLMs, now?
by Animats on 6/2/24, 9:12 PM
by Terr_ on 6/3/24, 5:16 PM
Given how long these have been pored over by existing hyperconnected nanomachine networks (i.e. brains) it may be that we'll mostly unearth qualities humans can already detect, even if only subconsciously.
When it comes to separating truth and lies, perhaps the real trick the computer will bring is removing context, e.g. scoring text without confirmation bias towards its conclusion.
by dhosek on 6/2/24, 8:19 PM
https://en.wikipedia.org/wiki/Eadweard_Muybridge
(the article doesn’t bother to mention any of this until near the end, in the tl;dr section, which, since it’s tl and you dr, you never got to).
by nkurz on 6/3/24, 1:24 AM
Great essay, but this small comment toward the end of the essay confused me. Is he saying that dogs never gallop?
I'm still not sure about the answer breed-by-breed, but searching for it led me to this interesting page illustrating different dog gaits: https://vanat.ahc.umn.edu/gaits/index.html
In particular, it seems to say that at least some dogs do the same "transverse gallop" that horses use: https://vanat.ahc.umn.edu/gaits/transGallop.html
And that greyhounds at least also do a "rotary gallop": https://vanat.ahc.umn.edu/gaits/rotGallop.html
I have a Vizsla (one of several breeds in the running for second-fastest after the greyhound), and my guess is that she at times does both gallops. I can't find a reference to confirm this, though.
by failrate on 6/3/24, 12:19 AM
by qup on 6/2/24, 8:24 PM
Site is struggling
by nickreese on 6/2/24, 9:05 PM
by kaycebasques on 6/2/24, 11:57 PM
by anigbrowl on 6/2/24, 8:56 PM
by lettergram on 6/3/24, 1:05 PM
We discover innovative ideas in companies and help them protect their IP.