by fromdoon on 3/3/14, 4:12 AM with 59 comments
by ggreer on 3/3/14, 5:28 AM
For a much deeper treatment of this subject, I recommend Global Catastrophic Risks [1], edited by Nick Bostrom and Milan Ćirković. The overarching point is straightforward (see the paragraph above), but the details of each threat are interesting on their own.
1. http://www.amazon.com/Global-Catastrophic-Risks-Nick-Bostrom...
by lutusp on 3/3/14, 6:48 AM
Are these people all completely ignorant of evolution and science? No matter what happens in the future, one or another species-obliterating risk is a certainty. Here's why:
1. Our species has existed for about 200,000 years.
2. On that basis, and given our present knowledge of biology and evolution by natural selection, it's reasonable to assume that, within another 200,000 years, we will have been replaced by another species that either successfully out-competed us, or into which we simply evolved over time.
3. Human beings are a note, perhaps a measure, in a natural symphony. We're not the symphony, and we're certainly not the reason the music exists.
4. Based on the above estimate, there will be about 10,000 more human generations (a quick arithmetic check follows this list), after which our successors will no longer resemble modern humans, in the same way that our ancestors from 200,000 years ago did not resemble us.
5. We need to get over ourselves -- our lives are a gift, not a mandate.
6. I plan to enjoy my gift, and not take myself too seriously. How about you?
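For what it's worth, here is a minimal sketch of the arithmetic behind point 4. It assumes an average generation length of roughly 20 years; that 20-year figure is an assumption for illustration, not something stated in the list:

    # Back-of-the-envelope check of point 4.
    span_years = 200_000             # rough age of our species, from point 1
    years_per_generation = 20        # assumed average generation length (illustrative)
    generations = span_years // years_per_generation
    print(generations)               # 10000, matching the "10,000 more generations" estimate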
by ilaksh on 3/3/14, 6:34 AM
However, if (when) super-intelligent artificial general intelligence "arrives", that pretty much makes normal, unaugmented humans the relative equivalent of chimps. It means that our opinions and actions are no longer historically relevant. We will be, relatively speaking, obsolete, cognitively outmatched people shuffling along doing comparatively trivial things. http://www.youtube.com/watch?v=I_Juh7Xh_70
In order for our opinions and abilities to actually matter relative to the super-doings and super-thoughts of the new AIs, we really _must_ have this magical nano-dust or something that integrates our existing Homo sapiens 1.0 brains with some type of artificial super-intelligence.
So that is what I am worried about -- will the super-AIs show up before the high-bandwidth nano-BCIs (brain-computer interfaces), or before I can afford them?
Of course, in the long run there may not be a good reason for AIs to use regular human bodies/brains at all, and so those may be phased out for subsequent generations.
by sdrothrock on 3/3/14, 5:30 AM
For example, through a technological singularity or even just through accumulated gene therapy over generations.
by Zigurd on 3/3/14, 6:39 AM
by midnitewarrior on 3/3/14, 6:24 AM