from Hacker News

Arrested by AI: Police ignore standards after facial recognition matches

by imichael on 1/14/25, 3:36 PM with 8 comments

  • by caseyy on 1/14/25, 9:56 PM

    Wow, they are using AI as some sort of fact machine. We (the general public) already know this is extremely incompetent, but they don’t care.

    At least precedent is building, and hopefully unscrupulous use of facial recognition AI by the police will soon be enough to convince courts to take a serious second look at the evidence. But the people who are affected now may be falsely imprisoned, which is awful.

    It reminds me of that case where police somewhere in the US arrested a person who, according to Google, had been in the proximity of a crime at least once, without any real evidence. It is baffling that they use these technological 8-balls as fact machines. An 8-ball would be more energy efficient and have better ergonomics at this point. I hope they are not considering it, but my Llama2 says they are, and that’s a fact by their measure after all, isn’t it?

  • by haswell on 1/14/25, 9:38 PM

    I’m not a very conspiracy-minded person, and this comment is mostly aimed at the Sam Altmans of the world, but when people talk about AI harms, especially harms in the “risk to all human life” category, I’m increasingly convinced that it’s an intentional misdirect away from the very real harms that are happening in front of us right now.

    The harm conversation needs to be refocused on these less sexy but nevertheless real emerging problems.

    As these tools make their way into more and more aspects of life, I can’t help but feel that new laws need to exist so that a “don’t use this for xyz high-risk purpose” warning actually has teeth.

  • by cyanydeez on 1/15/25, 2:43 AM

    Now everyone could get the “minority in the wrong neighborhood” treatment.