by dspoka on 12/14/24, 5:38 AM with 31 comments
by nopinsight on 12/14/24, 7:08 AM
The better the models are at reasoning, the less predictable they become.
This is analogous to top chess engines, which make surprising moves that even human super grandmasters can't always understand.
Thus, the future will be unpredictable (if superintelligence takes control).
Link to the full talk & the time he talks about this: https://youtu.be/1yvBqasHLZs?si=3M6eZCQtXnW2tSUd&t=866
by tablatom on 12/14/24, 8:41 AM
Is there a serious, coherent case for optimism about superintelligence anywhere? Ideally from voices that don't have a vested interest.
For example, give a superintelligence some money and tell it to start a company. Surely it's quickly going to understand that it needs to manipulate people to get them to do the things it wants, in the same way a kindergarten teacher sometimes has to “manipulate” the kids. Personally I can't see how we're not going to find ourselves in a power struggle with these things.
Does that make me an AI doomer party pooper? So far I haven’t found a coherent optimistic analysis. Just lots of very superficial “it will solve hard problems for us! Cure disease!”
It certainly could be that I haven’t looked hard enough. That’s why I’m asking.