by dthal on 9/16/23, 2:54 AM with 31 comments
by denton-scratch on 9/16/23, 10:13 AM
The entire purpose of the chooser system is to discriminate between people; they want to investigate only those people likely to be cheating. If they really want to avoid discrimination, then they should be choosing whom to investigate using a straw poll.
They have laws against certain kinds of discrimination, e.g. on the basis of race or gender. If those facts are used as input to the chooser, then race- and gender-discrimination is inevitable. There's usually no protection against discrimination for, e.g., being short, having red hair, or speaking with a regional accent; I have no idea how such characteristics are correlated with cheating on welfare claims.
by rnk on 9/16/23, 4:34 AM
by 0wis on 9/16/23, 8:38 AM
It's already looking like a bad piece of journalism from the first part:
“Being flagged for investigation can ruin someone’s life, and the opacity of the system makes it nearly impossible to challenge being selected for an investigation, let alone stop one that’s already underway. One mother put under investigation in Rotterdam faced a raid from fraud controllers who rifled through her laundry, counted toothbrushes, and asked intimate questions about her life in front of her children.”
Here the problem is not the algorithm, it's the investigators.
Another ethical problem for me: the flagging system relied partly on anonymous tips from neighbors. I am not an expert, but I feel more at ease with a system that relies on a selection algorithm plus randomness than with one that relies on denunciation.
I think the problem was the processes around the algorithm, not its existence itself. The journalist seems to assume throughout the piece that the algorithm will become the main/only way to identify fraudsters. If that's the case, it's terribly wrong, because how would you train the algorithm then?
Most of the time, the piece tries to put the reader in an emotional state of fear and anger and does hardly any analysis, while faking it with a lot of numbers and graphs.
Sorry for the long rant, but I am surprised that this came from Wired, which I consider quite good on tech topics, and that it's on HN's 2nd page.
I am against government scoring and algorithms for legal / police cases precisely because it can be badly used by powerful people.
Am I the only one who feels it's not a good article?
by 0xDEAFBEAD on 9/16/23, 8:36 AM
Humans seem more subject to bias than algorithms are. Algorithms only look at data, but humans are additionally vulnerable to stereotypes and prejudices from society.
Furthermore, using an algorithm gives voters an opportunity to have a debate regarding how best to approach a problem like welfare fraud.
Human judgment relies on bureaucrats who are often biased and unaccountable. It's infeasible for voters to audit every decision made by a human bureaucrat. Replacing the bureaucrat with an algorithm and inviting voters to audit the algorithm seems a heck of a lot more feasible.
I give the city of Rotterdam a lot of credit for the level of transparency they demonstrated in this article. If they want to be successful with algorithmic risk scores, I think they should increase the level of transparency even further. Run an open contest to develop algorithms for spotting welfare fraud. Give citizens or representatives information about the performance characteristics of various algorithms, and let them vote for the algorithm they want.
In the same way politicians periodically come up for re-election, algorithms should periodically come up for re-election too. Inform voters how the current algorithm has been performing, and give them the option to switch to something different.
by lozenge on 9/16/23, 7:40 AM
I think it is morally justifiable as a residency requirement, but not justifiable to let people live there without being able to receive government support.
I think it's a situation where the government wants to be racist, or at least xenophobic, the citizens agree, but the law prevents them. Accenture was drafted in to get around the law.
by friend_and_foe on 9/16/23, 4:30 AM
by croes on 9/16/23, 7:42 AM
Poor = suspicious
by nicbou on 9/16/23, 5:18 AM
Algorithms give the rank and file the option to defer all accountability to a machine. The algorithm makes mistakes, but no one gets blamed or fired for trusting it in the first place.