by corford on 4/8/24, 8:29 PM with 31 comments
by jlmorton on 4/8/24, 8:45 PM
Not directly the point of the article, but is it fair to say driverless cars are still just demos when they're operating on every street, road, and freeway from San Francisco to San Jose, with tens of millions of passenger miles?
I feel like once there are paying customers sitting in the vehicles, it's not a demo, it's a reality.
by mnk47 on 4/8/24, 11:59 PM
Scaling. That's it. They emphasized multiple times throughout the talk that this is what they achieved with the simplest, most naive approach.
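For context, the scaling claim is roughly that loss falls as a smooth power law in model size. A toy sketch in Python, with constants taken from the rough fits in Kaplan et al. (2020), purely for illustration:

    # Illustrative only: a Kaplan-style scaling law, L(N) = (Nc / N) ** alpha.
    # Nc and alpha are the paper's rough fitted constants, not gospel.
    Nc, alpha = 8.8e13, 0.076

    def predicted_loss(n_params: float) -> float:
        """Predicted cross-entropy loss (nats/token) for n_params parameters."""
        return (Nc / n_params) ** alpha

    for n in (1e8, 1e9, 1e10, 1e11):
        print(f"{n:.0e} params -> predicted loss ~{predicted_loss(n):.2f}")

Whether that fit keeps holding at ever-larger N is, of course, the open question.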
by proc0 on 4/8/24, 9:20 PM
We know gut bacteria affect the brain, and that emotions are linked to the state of our bodies. I think there is a knowledge gap in our understanding of intelligence around the necessity of embodiment.
Our bodies are potentially doing a big part of the "computations" that make up our capacity for general intelligence. This would also go a long way toward explaining how lower-level animals like insects display complex behavior with much simpler brains. AGI might be such a hard problem because it's not just about recreating the "computations" of the brain, but the "computations" of an entire organism, with the brain handling only coordination and self-awareness.
by vinni2 on 4/8/24, 9:01 PM
Dario Amodei from Anthropic. https://www.dwarkeshpatel.com/p/will-scaling-work
by bevekspldnw on 4/8/24, 8:43 PM
Easy.
by xanderlewis on 4/8/24, 8:52 PM
Ironically, an LLM could probably help him out.
by stared on 4/8/24, 8:51 PM
Well, I was shocked to see LLMs (rather than something intrinsically tied to Reinforcement Learning) reach the level of GPT-3.5, let alone GPT-4.
For starters, he should define what AGI means. By some criteria, it cannot exist (the no-free-lunch theorem and the like). Others say that GPT-4 already fulfils it. So, the question to the author is: can he say which AGI he means, and would he actually bet money on this claim?
by drbig on 4/8/24, 8:57 PM
1) LLMs at their core are an auto-complete solution. An extremely good one! But nothing more, even with all the accoutrements of prompt engineering/injection and whatever other "support systems" (_crutches_) you can think of.
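To make that concrete, here's a minimal sketch of the core loop, assuming the Hugging Face transformers library with GPT-2 standing in for any causal LM (illustrative, not anyone's production code):

    # A minimal sketch of the "auto-complete loop" at the heart of an LLM:
    # score every possible next token, append the best one, repeat.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tokenizer("The capital of France is", return_tensors="pt").input_ids

    with torch.no_grad():
        for _ in range(10):
            logits = model(ids).logits        # a score for every token in the vocabulary
            next_id = logits[0, -1].argmax()  # greedy: take the single most likely token
            ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

    print(tokenizer.decode(ids[0]))

Everything layered on top (chat formatting, tools, sampling tricks) still bottoms out in that loop.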
I'll end with my own paraphrasing of a great reply I got in this very forum some time ago: Bugs Bunny isn't funny. Bugs Bunny doesn't exist, and never did. The people _writing him_ had a sense of humor. Now replace Bugs Bunny with whatever (very, extremely) flawed image of """an AI persona""" you have.