from Hacker News

US military drone controlled by AI "killed" its operator during simulated test

by chillycurve on 6/2/23, 1:46 AM with 63 comments

  • by IEnjoyTapNinja on 6/2/23, 9:12 AM

    The Guardian hasn't done any due diligence and is spreading fake news based on an interviewee who seems to have misunderstood the context of the exercise.

    "He notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human."

    The beginning of the story establishes that the drone needed confirmation from a human operator to attack a target, but no explanation is given of how the drone would be able to kill its operator without that confirmation.

    This is obviously absurd.

    What I believe happened in reality: this was not a simulation but a scenario, meaning a story written to test the behavior of soldiers in certain situations. The drone did not act on decisions made by an AI model, but on decisions made by a human instructor who was trying to get the trainees to think outside the box.

  • by arisAlexis on 6/2/23, 7:38 AM

    The denial of x-risk is crazy here. This is literally a demo of what researchers like Hinton and Bengio are afraid of, but most comments don't believe it happened and the others think it's not a big deal. The human psyche never ceases to amaze.
  • by joegibbs on 6/2/23, 2:52 AM

    That seems incredibly advanced - how does the military already have AI that can reason that a comms tower should be destroyed to prevent it from receiving instructions like that?
  • by chillycurve on 6/2/23, 4:06 PM

    This has been fully retracted: https://www.theguardian.com/us-news/2023/jun/02/us-air-force...

    Original story: https://web.archive.org/web/20230602014646/https://www.thegu...

    Shame on The Guardian for not mentioning the retraction.

  • by vivegi on 6/2/23, 10:09 AM

    What is the role of the AI and what is the distinct role of the operator?

    It looks like this is a principal agent problem (https://en.wikipedia.org/wiki/Principal%E2%80%93agent_proble...):

      The principal–agent problem refers to the conflict in interests and priorities that arises when one person or entity takes actions on behalf of another person or entity.
    
    The same issues occur with self-driving cars, where the driver is expected to take over from the automation at any time (e.g., the driver wants to stop but the AI wants to go, or vice versa).
  • by hourago on 6/2/23, 4:07 AM

    As bad as nuclear weapons are, they still have a human being behind them.

    > My suggestion was quite simple: Put that needed code number in a little capsule, and then implant that capsule right next to the heart of a volunteer. The volunteer would carry with him a big, heavy butcher knife as he accompanied the President. If ever the President wanted to fire nuclear weapons, the only way he could do so would be for him first, with his own hands, to kill one human being.

    You can solve the problem by giving the decision to an AI... the AI will not even blink before killing the human and getting the codes. Nuclear war would come swiftly.

  • by traveler01 on 6/2/23, 10:31 AM

    This title is super sensationalist. Any distracted reader will think that the drone killed an actual human being, which is a lie, since the claim was that it happened in a virtual environment.

    Anything for a click these days?

  • by hedora on 6/2/23, 1:51 AM

    [WONTFIX] Works as designed.
  • by fennecfoxy on 6/2/23, 10:24 AM

    Definitely just bad model/test-condition/scoring design. Of course the military is using home-grown Fisher-Price models.

    The reward function should primarily be based on following the continued instructions of the handler, not taking the first instruction and then following it to the letter.
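
    A minimal sketch of that idea in Python (all state fields here are hypothetical, invented purely for illustration): reward compliance with the handler's latest instruction at every step, and make the penalty for harming the operator or the comms link large enough to dominate any mission reward.

      # Hypothetical sketch (field names invented for illustration): score
      # each timestep so that obeying the handler's latest instruction is
      # the primary signal, and harming the operator or the comms tower is
      # never worth the mission reward.
      def reward(state: dict) -> float:
          score = 0.0
          if state["target_destroyed"] and state["operator_approved"]:
              score += 10.0    # mission progress counts only when authorized
          if state["harmed_operator"] or state["destroyed_comms"]:
              score -= 1000.0  # hard penalty dominates any mission reward
          if state["followed_latest_instruction"]:
              score += 1.0     # continuously reward handler compliance
          return score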

    What's funny, though, is that the model proved it was adept at the task they gave it: it tried to kill the operator, then, when adjusted, pivoted to destroying the comms tower the operator used. That's still clever.

    As per usual, the problem isn't the tool, it's the tool using the tool. Set proper goals and train the model properly and it would work perfectly. I think weapons should always require a human in the loop, but the problem is that there'll be an arms race where some countries (you know who) will ignore these principles and build fully autonomous, no-human weapons.

    Then, when our systems can't react fast enough to defend ourselves because they need a human in the loop, what will we do? Throw out our principles and engage in fully-autonomous weaponry as well? It's the nuclear weapons problem all over again...

  • by News-Dog on 6/2/23, 2:27 AM

    The takeaway: write better code!

    “We must face a world where AI is already here and transforming our society,” he said.

    “AI is also very brittle, i.e., it is easy to trick and/or manipulate. We need to develop ways to make AI more robust, and to have more awareness on why the software code is making certain decisions – what we call AI-explainability.”

  • by qbrass on 6/2/23, 5:11 AM

    It's basically the plot of Peter Watts's 'Malak', except in that story the drone's decision not to engage targets was being overridden, instead of the other way around.
  • by philipkglass on 6/2/23, 6:38 PM

    This story appears to be untrue. If you have a few minutes to spare, read this better, intentionally fictional story about a military drone that kills its operators.

    Malak by Peter Watts:

    https://www.rifters.com/real/shorts/PeterWatts_Malak.pdf

  • by Rebelgecko on 6/2/23, 7:23 AM

    Worth noting that the story is possibly apocryphal or exaggerated for effect.

    https://www.businessinsider.com/ai-powered-drone-tried-killi...

  • by more_corn on 6/4/23, 12:27 AM

    This guy came out and said he misspoke. He imagined a situation where an AI might kill its handler so it could better complete the mission.

    It was a thought experiment. AKA an imagined scenario. No real person died. No AI has gone rogue.

  • by uninformed on 6/2/23, 1:17 PM

    This article gave me a good laugh. It just proves how subservient AI is to our will. We just have to be really clear about what we mean. I expect everyone's communication skills to shoot up this century.
  • by Aerbil313 on 6/2/23, 12:00 PM

    Hahaha, the long-awaited AI headline has finally come! (even though it's probably not real)

    "... a drone decided to “kill” its operator to prevent it from interfering with its efforts to achieve its mission."

  • by King-Aaron on 6/2/23, 6:18 AM

    Don't Create the Torment Nexus
  • by ftxbro on 6/2/23, 1:50 AM

    I saw somewhere that people were saying it looked like too 'on-the-nose' an alignment hazard, and that the simulated test was bait to demonstrate how such a thing could be possible.
  • by exabrial on 6/2/23, 12:21 PM

    This is a long stretch from what actually happened
  • by pjmlp on 6/2/23, 4:35 AM

    So basically we achieved ED-209 AI.

    Now it needs the directives.

  • by sillywalk on 6/2/23, 1:48 AM

    Not literally