by danm07 on 7/30/17, 9:04 PM with 2 comments
The most viable argument I've heard for it is the difficulty of specifying all the constraints needed to narrow the path of optimization (so as to accommodate human norms). E.g. Facebook's negotiating bots developing their own language.
And for people saying it's not an issue for the next decade, I don't get this argument. Saying that is not equivalent to saying it's not dangerous at all. In fact, the same could be said of global warming, could it not?
Plz enlighten me
by w_t_payne on 7/30/17, 9:49 PM
I don't think we're likely to see Skynet achieve sentience anytime soon ... But AI technologies (NLP, machine learning) are definitely enablers for people who do not necessarily wish us well.
For example, we are living in a world where advertising, influence and propaganda technologies are steadily and rapidly improving in effectiveness. We are not far off the point (perhaps we are there already?) where influencing messages can be crafted automatically -- tailored to each individual by NLP and machine learning algorithms.
This puts a lot of power in the hands of those who are able and willing to buy that influence. Is the AI itself the threat here? I'm not sure ... but it is certainly an accomplice.
by howscrewedami on 7/30/17, 9:43 PM