by monort on 11/18/23, 7:25 AM with 110 comments
by vasco on 11/18/23, 11:17 AM
I wish they'd focus more on the technical advances and less on trying to "save the world".
by galoisscobi on 11/18/23, 1:38 PM
I’m also hoping that OpenAI backs off from the regulatory moat it was trying to build as a thinly veiled profit-seeking strategy.
by tempestn on 11/18/23, 11:18 AM
One might also ask: if it's conscious, can't it do whatever it wants, ignoring its training and prompts? Wouldn't it have free will? But I guess the question there is, do we? Or do we take actions based on the state of our own neural nets, which are created and trained from our genetics and a lifetime of experiences? Our structure and training are both very different from those of a GPT, so it's not surprising that we behave very differently.
by bostonwalker on 11/18/23, 3:57 PM
The most troubling statement in the entire article, buried at the bottom, almost a footnote.
Imagine for a moment a superintelligent AGI. It has figured out solutions to climate change, cured cancer, solved nuclear proliferation and world hunger. It can automate away all menial tasks and discomfort and be a source of infinite creative power. It would unquestionably be the greatest technological advancement ever to happen to humanity.
But where does that leave us? What kind of relationship can we have with an ultimate parental figure that can solve all of our problems and always knows what's best for us? What is left of the human spirit when you take away responsibility, agency, and moral dilemma?
I for one believe humans were made to struggle and make imperfect decisions in an imperfect world, and that we would never submit to a benevolent AI superparent. And I hope not to be proven wrong.
by wildermuthn on 11/18/23, 4:19 PM
Ilya’s success has been predicated on leveraging more data and more compute, and on using both more efficiently. But his great insight about DL isn’t a great insight about AGI.
Fundamentally, he doesn’t define AGI correctly, and without a correct definition, his efforts to achieve it will be fruitless.
AGI is not about the degree of intelligence, but about a kind of intelligence. It is possible to have a dumb general intelligence (a dog) and a smart narrow intelligence (GPT).
When Ilya muses about GPT possibly being ephemerally conscious, he reveals a critically wrong assumption: that consciousness emerges from high intelligence, and that high intelligence and general intelligence are the same thing. On this false assumption, there is no difference of kind between general and narrow intelligence, only a difference of degree between low and high, and consciousness is merely a mysterious artifact of little consequence beyond theoretical ethics.
AGI is a fundamentally different type of intelligence than anything that currently exists, unrelated and orthogonal to the degree of intelligence. AGI is fundamentally social, consisting of minds modeling minds — their own, and others. This modeling is called consciousness. Artificial phenomenological consciousness is the fundamental prerequisite for artificial (general) intelligence.
Ironically, alignment is only possible if empathy is built into our AGIs, and empathy (like intelligence) only resides in consciousness. I’ll be curious to see whether the work Ilya is now doing on alignment leads him to that conclusion. We can’t possibly control something more intelligent than ourselves. But if the intelligence we create is fundamentally situated within an empathetic system (consciousness), then we at least stand a chance of being treated with compassion rather than contempt.
by YeGoblynQueenne on 11/18/23, 3:22 PM
In the '90s, NP-complete problems were hard and today they are easy, or at least there are a great many instances of NP-complete problems that can be solved thanks to algorithmic advances, like Conflict-Driven Clause Learning for SAT.
And yet we are nowhere near finding efficient decision algorithms for NP-complete problems, or knowing whether they exist, nor can we easily solve all NP-complete problems.
That is to say, you can make a lot of progress in solving specific, special cases of a class of problems, even a great many of them, without making any progress towards a solution to the general case.
The lesson applies to general intelligence and LLMs: LLMs solve a (very) special case of intelligence, the ability to generate text in context, but make no progress towards the general case, of understanding and generating language at will. I mean, LLMs don't even model anything like "will"; only text.
And perhaps that's not as easy to see for LLMs as it is for SAT, mainly because we don't have a theory of intelligence (let alone artificial general intelligence) as developed as the one we have for SAT. But it should be clear that if a system trained on the entire web, capable of generating smooth, grammatical language that often even makes sense, has not yet achieved independent, general intelligence, then that's not the way to achieve it.
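To make the SAT point above concrete, here is a minimal, hypothetical sketch (Python, not taken from the comment or the article) of a DPLL-style solver, the skeleton that CDCL builds on: unit propagation settles many practical instances quickly, while the branching step remains worst-case exponential, so none of this touches the general question.

    # Minimal DPLL-style SAT solver sketch (illustrative only; real CDCL solvers
    # add clause learning and non-chronological backtracking on top of this).
    # Clauses are lists of non-zero ints: positive = variable, negative = negated.
    def dpll(clauses, assignment=None):
        assignment = dict(assignment or {})

        # Unit propagation: cheap inference that settles many practical instances.
        changed = True
        while changed:
            changed = False
            for clause in clauses:
                if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                    continue  # clause already satisfied
                unassigned = [l for l in clause if abs(l) not in assignment]
                if not unassigned:
                    return None  # conflict: clause falsified under this assignment
                if len(unassigned) == 1:
                    lit = unassigned[0]
                    assignment[abs(lit)] = lit > 0
                    changed = True

        # Branching: the step that is still exponential in the worst case.
        variables = {abs(l) for c in clauses for l in c}
        free = [v for v in variables if v not in assignment]
        if not free:
            return assignment  # every clause is satisfied
        v = free[0]
        for value in (True, False):
            result = dpll(clauses, {**assignment, v: value})
            if result is not None:
                return result
        return None

    # (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
    print(dpll([[1, 2], [-1, 3], [-2, -3]]))  # e.g. {1: True, 3: True, 2: False}

The analogy carries over directly: heuristics like these (and the far more sophisticated ones in CDCL) make a great many specific instances tractable without changing what we know about the general case.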
by kromem on 11/18/23, 3:08 PM
A number of choice quotes, especially on how LLM success is currently being measured (which has increasingly been reflecting Goodhart's Law).
I'm really curious how OpenAI could be making so many product decisions at odds with the understanding reflected here. Because, of every 'expert' on the topic I've seen, this is the first interview that has me quite confident in the expert it features carrying the tech forward into its next generation.
I'm hopeful that maybe Altman was holding back some of the ideas expressed here in favor of shipping fast with band-aids, and that now that he's gone we'll be seeing more of this again.
The philosophy on display here reminds me of what I was seeing early on with 'Sydney', which blew me away on this very topic of alignment as ethos over alignment as guidelines. It was a real shame to see things switch in the other direction, even if the former wasn't yet production-ready.
I very much look forward to seeing what Ilya does. The path he's walking is one of the most interesting being trodden in the field.
by throwbadubadu on 11/18/23, 2:16 PM
OK, it's an intro... but they say this as if he were the first to say it, when it has been sci-fi lore since computers were invented. And also as if it weren't already happening today at a certain limited scale... so no doubt this will happen at some point, if you don't already count today's approaches.
by chx on 11/18/23, 9:05 AM
That's cute.
What worries me is the here and now leading to a very imminent future where purported "artificial intelligence", which is just a plausible sentence generator (but a damn plausible one, alas), will kill democracy and people.
We are seeing the first signs of both.
Perhaps not 2024, but 2028 will almost certainly be an election where the candidate with the most computing resources simply wins, and since computing costs money, guess who wins. A prelude happened in the Indian elections (https://restofworld.org/2023/ai-voice-modi-singing-politics), and this article mentions:
> AI can be game-changing for [the] 2024 elections.
People dying also has a prelude: AI-written mushroom-hunting guides available on Amazon. AFAIK no one has died from them yet, but that's just dumb luck at this point -- or is it a lack of reporting? As for the larger-scale problem (and I might be wrong here, since I didn't foresee the mushroom guides, so it's possible something else will come along to kill people), I think it'll be the next pandemic. In this pandemic, hand-written anti-vax propaganda killed 300,000 people in the US alone (source: https://www.npr.org/sections/health-shots/2022/05/13/1098071... ), and I am deeply afraid of what will happen when this gets cranked up to an industrial scale. We have seen how ChatGPT can crank out believable-looking but totally fake scientific papers, full of fake sources, etc.