by skwirl on 6/4/24, 4:38 PM
It’s pretty interesting how widely divergent the views of this technology’s current state and promise are. The dominant view here on HN seems to be that generative models are overhyped toys that burn massive amounts of energy and investment capital with little sign of promise. Meanwhile you have these AI researchers warning about “the loss of control of autonomous AI systems potentially resulting in human extinction.” What are we to make of this?
by commandlinefan on 6/4/24, 4:24 PM
Seems like a pretty transparent call for regulatory capture with no specifics.
by djyaz1200 on 6/4/24, 4:54 PM
What are they going to do to harm us, and how are they going to do it? I'm seriously asking.
by deadbabe on 6/4/24, 4:40 PM
Isn’t taking risks just a standard part of technological innovation?
by moondistance on 6/4/24, 4:29 PM
I'd like to build a chatbot arena with pro-humanity questions.
by ein0p on 6/4/24, 4:42 PM
Periodic reminder: at the moment, governments worldwide are literally developing killer robots which can’t disobey their orders. Your fears are comically misplaced.
by dmix on 6/4/24, 5:11 PM
I worry AI safety is going to be a huge political grift soon.
by nilawafer on 6/4/24, 4:34 PM
When China & India form an alliance and send a peacekeeping army to protect the former citizens of the fallen American Empire Megacorp, then the American public might have reason to regret AI. But until then it’s just another distraction.