from Hacker News

Ask HN: How could setting an AI's goal to be “increase human autonomy” go awry?

by evangow on 12/2/17, 4:18 PM with 2 comments

I read Nick Bostrom's book "Superintelligence: Paths, Dangers, Strategies" a while back and found the problem of setting goals for an AI interesting and difficult. He outlines various ways an AI could misconstrue its goals, eventually leading to human extinction. I think setting the goal to "increase human autonomy" might get around some of these problems. I'm interested to hear how people think it could go awry, though.
  • by schoen on 12/2/17, 4:27 PM

    I guess a natural question is how to define and measure human autonomy.

    If it's the autonomy of each individual human, increasing it without bound would cause existing societies to fall apart quickly (which is potentially fine under some ethical theories) and could create severe danger for other humans, since people could use their enhanced abilities to fight and harm one another.

    If it's the autonomy of humanity as a whole, you have to define some way of aggregating preferences or otherwise determining humanity's collective will -- already a significant challenge today.
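
    As a minimal illustration of why that aggregation is hard, here's a
    hypothetical Python sketch (not from the thread; the ballot data is
    made up) of the classic Condorcet paradox: three voters with cyclic
    preferences leave pairwise majority voting with no consistent group
    ranking.

        from itertools import combinations

        # Each ballot ranks candidates from most to least preferred
        # (hypothetical data chosen to produce a cycle).
        ballots = [
            ["A", "B", "C"],
            ["B", "C", "A"],
            ["C", "A", "B"],
        ]

        def pairwise_winner(x, y, ballots):
            """Return the majority winner between candidates x and y."""
            x_wins = sum(1 for b in ballots if b.index(x) < b.index(y))
            return x if x_wins > len(ballots) / 2 else y

        for x, y in combinations("ABC", 2):
            print(f"{x} vs {y}: majority prefers {pairwise_winner(x, y, ballots)}")

        # A beats B, B beats C, yet C beats A -- a cycle, so "the will
        # of the group" is undefined under pairwise majority rule.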

  • by yorwba on 12/2/17, 4:32 PM

    Humans are obviously most autonomous if they are prevented from contact with the rest of the world and then left to fend for themselves.