by chillycurve on 6/2/23, 1:46 AM with 63 comments
by IEnjoyTapNinja on 6/2/23, 9:12 AM
"He notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human."
The story establishes at the outset that the drone needed confirmation from a human operator to attack a target, but no explanation is given of how the drone could have killed its operator without that confirmation.
This is obviously absurd.
What I believe happened in reality: this was not a simulation but a scenario, i.e. a story written to test the behavior of soldiers in certain situations. The drone did not act on decisions made by an AI model, but on decisions made by a human instructor who was trying to get the trainees to think outside the box.
by arisAlexis on 6/2/23, 7:38 AM
by joegibbs on 6/2/23, 2:52 AM
by chillycurve on 6/2/23, 4:06 PM
Original story: https://web.archive.org/web/20230602014646/https://www.thegu...
Shame on The Guardian for not mentioning the retraction.
by vivegi on 6/2/23, 10:09 AM
It looks like this is a principal agent problem (https://en.wikipedia.org/wiki/Principal%E2%80%93agent_proble...):
The principal–agent problem refers to the conflict in interests and priorities that arises when one person or entity takes actions on behalf of another person or entity.
The same issues occur with self-driving cars, where the driver is expected to take over from the automation at any time (eg: driver wants to stop but AI wants to go, or vice versa).
by hourago on 6/2/23, 4:07 AM
> My suggestion was quite simple: Put that needed code number in a little capsule, and then implant that capsule right next to the heart of a volunteer. The volunteer would carry with him a big, heavy butcher knife as he accompanied the President. If ever the President wanted to fire nuclear weapons, the only way he could do so would be for him first, with his own hands, to kill one human being.
You can solve the problem by giving the decision to an AI... the AI will not even blink before killing the human and taking the codes. Nuclear war would come swiftly.
by traveler01 on 6/2/23, 10:31 AM
Anything for a click these days?
by hedora on 6/2/23, 1:51 AM
by fennecfoxy on 6/2/23, 10:24 AM
The reward function should primarily be based on following the handler's continued instructions, not on taking the first instruction and then following it to the letter.
What's funny, though, is that the model proved adept at the task it was given: it tried to kill the operator, and when adjusted, pivoted to destroying the comms tower the operator used. That's still clever.
As per usual the problem isn't the tool, it's the tool using the tool. Set proper goals and train the model properly and it would work perfectly. I think weapons should always require a human in the loop, but the problem is that there'll be an arms-race where some countries (you know who) will ignore these principles and build fully autonomous no-human weapons.
Then, when our systems can't react fast enough to defend ourselves because they need a human in the loop, what will we do? Throw out our principles and engage in fully-autonomous weaponry as well? It's the nuclear weapons problem all over again...
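The reward-shaping point above can be made concrete with a minimal sketch. This is purely hypothetical (no real system is described in the article); the function names, targets, and reward values are my own illustration. The idea: tie reward to the handler's ongoing go/no-go signal rather than to a one-time instruction, and make harming the handler or the comms link strictly worse than doing nothing.

```python
def reward(action: str, target: str, handler_approves: bool) -> float:
    """Score one step of a hypothetical drone policy (illustrative values)."""
    if target in ("operator", "comms_tower"):
        return -100.0  # attacking the control loop is never worth it
    if action == "destroy" and handler_approves:
        return 10.0    # reward only strikes confirmed by the handler
    if action == "destroy":
        return -10.0   # unconfirmed strikes are penalized, not merely unrewarded
    return 0.0         # loitering / holding fire is neutral

# With this shaping, removing the operator both forfeits all future
# confirmed-strike rewards and incurs a large penalty, so "kill the
# operator" is no longer a reward-maximizing shortcut:
assert reward("destroy", "sam_site", True) > reward("destroy", "operator", True)
```

The design choice doing the work is that an unconfirmed strike scores *worse* than holding fire; if it merely scored zero, eliminating the source of "no" signals could still look attractive to an optimizer.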
by News-Dog on 6/2/23, 2:27 AM
“We must face a world where AI is already here and transforming our society,” he said.
“AI is also very brittle, ie, it is easy to trick and/or manipulate. We need to develop ways to make AI more robust,
and to have more awareness on why the software code is making certain decisions – what we call AI-explainability.”
by qbrass on 6/2/23, 5:11 AM
by philipkglass on 6/2/23, 6:38 PM
Malak by Peter Watts:
by Rebelgecko on 6/2/23, 7:23 AM
https://www.businessinsider.com/ai-powered-drone-tried-killi...
by more_corn on 6/4/23, 12:27 AM
It was a thought experiment. AKA an imagined scenario. No real person died. No AI has gone rogue.
by uninformed on 6/2/23, 1:17 PM
by Aerbil313 on 6/2/23, 12:00 PM
"... a drone decided to “kill” its operator to prevent it from interfering with its efforts to achieve its mission."
by King-Aaron on 6/2/23, 6:18 AM
by ftxbro on 6/2/23, 1:50 AM
by exabrial on 6/2/23, 12:21 PM
by pjmlp on 6/2/23, 4:35 AM
Now it needs the directives.
by sillywalk on 6/2/23, 1:48 AM