by zonovar on 9/14/22, 9:47 PM with 11 comments
by SilverBirch on 9/14/22, 10:34 PM
Let's say that this machine breaks out and decides to take over the world to achieve whatever stupid task we've given it (I've yet to see an AI ethics paper that doesn't exclusively deal in stupid tasks). Do we really think the AI is going to figure out that it needs to stabilize the power grid before it starts exponentially drawing power to collect buttons or whatever it is?
What is much more likely, from our experience, is that a basic AGI is going to do something really dumb. And then when we reboot it, it'll do something dumb another 1,000 times. And maybe eventually it'll do something almost as smart as us. This is what we term childhood.
I think a core assumption of these ethics papers is that the AI is generally intelligent, far more intelligent than anything plausible, and never needs to learn from experience. Let me tell you, I'm pretty generally intelligent, and I've failed to take over the world more times than I can count.
by rini17 on 9/14/22, 11:54 PM
by pyinstallwoes on 9/15/22, 4:26 AM