by imichael on 1/14/25, 3:36 PM with 8 comments
by caseyy on 1/14/25, 9:56 PM
At least precedent is building, and hopefully the unscrupulous use of facial recognition AI by police will soon be enough to convince courts to take a serious second look at this kind of evidence. But the people affected now may be falsely imprisoned, which is awful.
It reminds me of a case where police somewhere in the US arrested a person, without any real evidence, because Google told them the person had been in proximity to a crime at least once. It is baffling that they use these technological 8-balls as fact machines. At this point, an actual 8-ball would be more energy efficient and have better ergonomics. I hope they are not considering it, but my Llama 2 says they are, and by their measure that is a fact, isn't it?
by JohnMakin on 1/14/25, 8:03 PM
by haswell on 1/14/25, 9:38 PM
The harm conversation needs to be refocused on these less sexy but nevertheless real emerging problems.
As these tools make their way into more and more aspects of life, I can’t help but feel that new laws need to exist so that a “don’t use this for xyz high-risk purpose” warning actually has teeth.
by cyanydeez on 1/15/25, 2:43 AM