by miobrien on 6/13/23, 1:27 PM with 161 comments
by Workaccount2 on 6/13/23, 2:36 PM
Basically there is this innate idea that if the basic building blocks are simple systems with deterministic behavior, then the greater system can never be more than that. I've seen this in spades within the AI community: "It's just matrix multiplication! It's not capable of thinking or feeling!"
Which to me always felt more like a hopeful statement than a factual one. These guys have no idea what consciousness is (nobody does), nor do they have any reference point for what exactly "thinking" or "feeling" is. They can't prove I'm not a stochastic parrot any more than they can prove whatever cutting-edge LLM isn't.
So while yes, present LLMs likely are just stochastic parrots, the same technology scaled up might bring us a model for which there actually is "something it is like to be" it, and we'll have everyone treating it with reckless disregard because "it's just a stochastic parrot".
by isp on 6/13/23, 2:32 PM
> Optimist: AI has achieved human-level performance!
> Realist: “AI” is a collection of brittle hacks that, under very specific circumstances, mimic the surface appearance of intelligence.
> Pessimist: AI has achieved human-level performance.
by mach1ne on 6/13/23, 2:27 PM
This might be the first time the term appeared in an ’official’ context, but is it really the origin? It feels like the term has been floating around for longer, and even Google Trends shows significant search volume well before 2021.
by samgilb on 6/13/23, 3:48 PM
by rsynnott on 6/13/23, 2:20 PM
I was initially thinking "well, yes, Nobel Prize for Stating the Obvious there", but it looks like the paper was written in the far distant past of 2021, when LLMs were still largely in their babbling-obvious-nonsense stage, rather than the current state of the art, where they babble dangerously convincing nonsense. So, well, fair enough, I suppose.
Amazing how fast progress has been there, though it's progress in an arguably rather worrying direction, of course.
by seydor on 6/13/23, 2:34 PM
The term seems unfortunate in general, because the models appear to do more than parroting. LLMs are more like the central pattern generators of the nervous system, able to flexibly create well-coordinated patterns when guided appropriately.
by dekhn on 6/13/23, 3:42 PM
At what point does a stochastic parrot fake it till it makes it? Does it even matter? We can imagine that, within 10 years, we'll have a fully synthetic virtual human simulator: a generative AI combined with a knowledge base, language parsing, and audio and video recognition; basically a talking head that could join your next technical meeting and look like a full contributor. If that happens, will the Timnits and the Benders of the world admit that, perhaps, systems which are indistinguishable from a human may not just be parrots, or that, perhaps, we are just sufficiently advanced parrots?
Seen from that perspective, the promoters of "stochastic parrots" would seem to be Luddites and close-minded, discouraging legitimate, important, and valuable scientific research.
by renewiltord on 6/13/23, 3:29 PM
The organizations that listened to these people for even some amount of time got hosed in this situation. Google managed to oust this flock from within, but not before its AIs were so lobotomized that they are widely renowned for being the village idiot.
Ultimately, this paper is a triumph of branding over science. Read it if you'd like. But if you let these kinds of people into your organization, they'll cripple it. It costs a lot to get them out. Instead, simply never let them in.
by the8472 on 6/13/23, 4:07 PM
by rchaud on 6/13/23, 3:49 PM
Everything we revile about online recipe websites that spend 1,000 words on the history of cooking before getting to the point will be part and parcel of AI-written anything. It won't be properly proofread or edited by a human, because that would defeat the purpose.
by adamsmith143 on 6/13/23, 3:59 PM
by dehrmann on 6/13/23, 3:14 PM
by api on 6/13/23, 2:24 PM
What these LLMs and diffusion models and the like actually are is a lossy compression method that permits structural queries. The fact that they can learn structure as well as content also allows them to reason, but only to the extent that the rules they're following existed somewhere in the training data and its structure.
If one of these models were given access to senses, memory, and feedback mechanisms, and learned language that way, it might be considered actually intelligent, or even sentient if it exhibited autonomy and value judgments.
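To make the "lossy compression" framing concrete, here's a minimal sketch (a toy bigram model in Python; my own illustration, not anything from the paper). The count table is far smaller than the corpus it was built from, and "querying" it can only recombine patterns that appeared in the training data:

    import random
    from collections import defaultdict

    # Toy corpus: the "training data" being lossily compressed.
    corpus = (
        "the cat sat on the mat . the dog sat on the rug . "
        "the cat chased the dog . the dog chased the cat ."
    ).split()

    # "Compression": keep only bigram counts, discard the corpus itself.
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def sample_next(word):
        # A "structural query": draw the next token in proportion to
        # how often it followed `word` in the training data.
        followers = counts[word]
        r = random.uniform(0, sum(followers.values()))
        for candidate, c in followers.items():
            r -= c
            if r <= 0:
                return candidate

    word = "the"
    out = [word]
    for _ in range(12):  # generate plausible recombinations of seen patterns
        word = sample_next(word)
        out.append(word)
    print(" ".join(out))

The output is locally fluent, but nothing in it goes beyond structure already present in the data, which is the sense in which generation is a query over the compression.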
by Invictus0 on 6/13/23, 2:32 PM
by hackandthink on 6/13/23, 5:01 PM
"Meaning without reference in large language models"
"we argue that LLM likely capture important aspects of meaning, and moreover work in a way that approximates a compelling account of human cognition in which meaning arises from con- ceptual role"
https://arxiv.org/pdf/2208.02957.pdf
I remember Quine's meaning holism; it seems to be related.
by RHSman2 on 6/13/23, 6:08 PM
by cubefox on 6/13/23, 2:51 PM
by aaroninsf on 6/13/23, 5:22 PM
because such accounts are both accurate and deeply misleading.
This is description, but it is neither predictive nor explanatory.
It implies a false model, rather than providing one.
Evergreen:
Ximm's Law: every critique of AI assumes to some degree that contemporary implementations will not, or cannot, be improved upon.
Lemma: any statement about AI which uses the word "never" to preclude some feature from future realization is false.
by koalala on 6/13/23, 4:21 PM
It seems to me that the great success transformers are now enjoying stems precisely from the fact that 'probabilistic information about how they combine' _is_ meaning.
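If that reading is right, you can see a crude version of it even without transformers: plain co-occurrence statistics already recover a sliver of meaning. A toy distributional-semantics sketch in Python (my own illustration, with a made-up mini-corpus):

    import math
    from collections import Counter, defaultdict

    # "You shall know a word by the company it keeps" (Firth).
    sentences = [
        "the cat drinks milk", "the dog drinks water",
        "the cat chases the mouse", "the dog chases the ball",
        "the car needs fuel", "the car drives on the road",
    ]

    # Represent each word by counts of its neighbors (+/- 2 word window):
    # this is exactly "probabilistic information about how they combine".
    vectors = defaultdict(Counter)
    for s in sentences:
        words = s.split()
        for i, w in enumerate(words):
            for j in range(max(0, i - 2), min(len(words), i + 3)):
                if j != i:
                    vectors[w][words[j]] += 1

    def cosine(u, v):
        # Similarity between two sparse count vectors.
        dot = sum(u[k] * v[k] for k in u if k in v)
        norm_u = math.sqrt(sum(c * c for c in u.values()))
        norm_v = math.sqrt(sum(c * c for c in v.values()))
        return dot / (norm_u * norm_v)

    print(cosine(vectors["cat"], vectors["dog"]))  # ~0.92: similar company
    print(cosine(vectors["cat"], vectors["car"]))  # ~0.61: less similar

Words that combine with the same neighbors end up with similar vectors, so "cat" lands nearer "dog" than "car" without anyone ever defining what a cat is. Transformers learn a far richer version of the same signal.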
by nologic01 on 6/13/23, 6:10 PM
by browningstreet on 6/13/23, 6:32 PM
by constantcrying on 6/13/23, 3:53 PM