by wskinner on 7/11/23, 2:28 PM with 214 comments
by waihtis on 7/11/23, 3:27 PM
by waffletower on 7/11/23, 4:22 PM
by FrustratedMonky on 7/11/23, 4:04 PM
But the one that was just released about AI for autonomous drones is pretty shocking.
The military side is farther along than I thought. AND, you know, whatever appears in a Netflix documentary is already old; it isn't the latest work, which is still secret.
In any case, here's why it relates to this article.
We are in a race with other nation states, so all of this discussion about 'curtailing' or limiting AI is just grandstanding, a smoke screen.
Nothing will slow down development, because other countries are also developing AI, and everyone firmly believes they have to be first or they will be wiped out. Hence no one will slow down.
by valine on 7/11/23, 3:28 PM
The existential threat argument is a ridiculous notion to begin with. Pretending open source LLMs don’t already exist makes the argument even sillier.
by 300bps on 7/11/23, 3:20 PM
https://www.theverge.com/2023/7/9/23788741/sarah-silverman-o...
Imagine companies thinking they can scrape all the world's information for free and then package it up and sell it.
by phillipcarter on 7/11/23, 4:27 PM
> There are interventions we can make now, including the regulation of “high-risk applications” proposed in the EU AI Act. By regulating applications we focus on real harms and can make those most responsible directly liable. Another useful approach in the AI Act is to regulate disclosure, to ensure that those using models have the information they need to use them appropriately.
by guy98238710 on 7/11/23, 4:18 PM
We wish. AI is hardware-limited, and hardware is not moving fast. We are very, very far from matching the raw compute power of the human brain. Robots are even more limited compared to the human body.
by waffletower on 7/11/23, 4:25 PM
by guy98238710 on 7/11/23, 5:14 PM
by jononomo on 7/11/23, 3:24 PM
Also, many people seem to be mistaking "artificial intelligence" for "actual intelligence".
You know that these LLMs are not actually intelligent, right? Right???
by arisAlexis on 7/11/23, 3:43 PM
This is entirely false. It sounds like Andersen, who seems incapable of understanding that agency and goals will be a thing.
by cubefox on 7/11/23, 3:59 PM
https://www.lesswrong.com/posts/6untaSPpsocmkS7Z3/ways-i-exp...
by tyre on 7/11/23, 3:38 PM
The capabilities of current generation AI—LLMs plus audio/image/video generative models—are already advanced enough that wide-scale distribution is extremely dangerous.
Social media allowed for broadcasting from 1:many and 1:1 with troll armies, but LLMs are on a whole other planet of misinformation. A scalable natural language interface can target people at the one-to-one conversation level and generate misinformation ad hoc across text, voice, images, and video.
The trouble with this approach is that harm will rack up way faster than liability can catch up, with irreversible and sometimes immeasurable long term effects. We have yet to contend with the downstream effects of social media and engagement hacking, for example, and suing Facebook out of existence wouldn't put a dent in it.
Current generation AI has enough capabilities to swing the 2024 US presidential election. It can be used today for ransom pleas trained on your children's Instagram posts. It doesn't seem farfetched that AI could start a war. Not because it's SkyNet, but because we've put together the tools to influence already-powerful-enough human forces into civilization-level catastrophe.
by Tenoke on 7/11/23, 5:31 PM
Power imbalance is not 'much worse' than extinction.
It's very clear how people start with the conclusion ("openness and progress have been good and will keep being the best choice") and then contort their thinking and arguments to match it, without really critically examining the counterarguments.
by earthboundkid on 7/11/23, 3:56 PM
Okay, GLHF.
by kramerger on 7/11/23, 3:17 PM
by zeryx on 7/11/23, 3:17 PM
2 years ago I thought we were decades away from general-purpose AI, and this is coming from a guy who implemented transformer models on day 5. My time estimates have been proven very wrong.
I'm equally worried about the value of white-collar labour dropping to near zero in my lifetime, and about the ultimate centralization of power. The movie Elysium seems less and less like science fiction every day.
I am happy politicians and think tanks are taking this seriously, and you should be too.
by stOneskull on 7/11/23, 4:19 PM
by arisAlexis on 7/11/23, 3:45 PM
This is even more utopian than communism.
"There will still be Bad Guys looking to use them to hurt others or unjustly enrich themselves. But most people are not Bad Guys"
This is a failure to understand that even one bad guy can kill everyone with a sufficiently advanced AGI.