by smt88 on 8/25/21, 6:56 PM
This simple site is a far better demo and explanation of the extreme danger of Apple's proposal than any of the long articles written about it.
Thank you for caring enough to put this together and publish it.
by umvi on 8/25/21, 7:14 PM
Seems like this CSAM tech could be super useful in China for detecting Winnie the Pooh or other evidence of thought crime against the regime. Even if Apple doesn't end up rolling it out, I'm sure Huawei is taking careful notes.
by firebaze on 8/25/21, 7:16 PM
by nullc on 8/25/21, 10:36 PM
I think Apple may have figured out that the best way to get people to accept backdoored encryption is simply to not call it backdoored, and claim that it's a privacy feature...
...as if having a trillion-dollar corporation playing Batman and going on a vigilante crusade to scan your private files is a situation we should already be comfortable with.
by drexlspivey on 8/25/21, 7:15 PM
Cool! Now do one where the user uploads the image and it tweaks it to find a collision on the fly.
by CodesInChaos on 8/25/21, 7:34 PM
Since the target image is chosen in advance, this is a (second-)preimage attack, not merely a collision.
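To make that concrete, here's a rough sketch of how such a targeted match can be found, assuming you can treat the hash model as a differentiable black box. Every name below (hash_net and friends) is a placeholder, not the real NeuralHash API; the published demos did roughly this against the model extracted from iOS, optimizing the real-valued output before it gets binarized:

    # Toy second-preimage sketch: nudge the source image until the hash
    # model's output matches the fixed target's output.
    import torch

    def second_preimage(hash_net, source_img, target_img, steps=1000, lr=0.01):
        target_out = hash_net(target_img).detach()      # fixed target digest
        adv = source_img.clone().requires_grad_(True)
        opt = torch.optim.Adam([adv], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = torch.nn.functional.mse_loss(hash_net(adv), target_out)
            loss.backward()
            opt.step()
            adv.data.clamp_(0.0, 1.0)                   # keep valid pixel values
        return adv.detach()         # looks like the source, hashes like the target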
by spoonjim on 8/25/21, 9:31 PM
Imagine hiring a young-looking 18-year-old model to duplicate the photos in the database and create a hash collision. Now you have a photo that is perfectly legal for you to possess but that can rain down terror on anyone you distribute it to.
by advisedwang on 8/25/21, 8:49 PM
The argument against this tech is a slippery slope argument - that this technology will eventually be expanded to prevent copyright infringement, censor obscenity, limit political speech, or police other areas.
I know this is a controversial take (in HN circles), but I no longer believe this will happen. This kind of tech has existed for a while, and it simply hasn't been misapplied. I now think that this technology has proved to be an overall net good.
by spullara on 8/25/21, 7:16 PM
So 30+ images get flagged, they run them against the real CSAM database, and they don't match? Or say someone somehow makes an image that gets flagged by both, a reviewer looks at it, and it isn't CSAM. Nothing happens.
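As I understand the claimed flow, it's roughly this; every name below is a placeholder, not Apple's actual API, and the details come from Apple's own published threat-model description:

    THRESHOLD = 30                       # matches needed before anything is reviewed
    SERVER_CSAM_HASHES = set()           # stand-in for the independent server-side hash list

    def server_side_hash(image):
        return hash(image)               # stand-in for the second, private perceptual hash

    def human_reviewer_confirms(image):
        return False                     # stand-in for the manual review step

    def review_account(flagged_images):
        if len(flagged_images) < THRESHOLD:
            return "no action"           # below the reporting threshold
        survivors = [img for img in flagged_images
                     if server_side_hash(img) in SERVER_CSAM_HASHES]
        if not survivors:
            return "no action"           # on-device matches were false positives
        if not any(human_reviewer_confirms(img) for img in survivors):
            return "no action"           # reviewers see it isn't CSAM
        return "report"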
by WesolyKubeczek on 8/25/21, 7:23 PM
Each image on the left has a blob vaguely similar to the highlights in the dog image on the right. Likely the "perceptual" algorithm isn't "perceiving" contrast the same way human eyes and brains do.
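For intuition, here's what a classic perceptual hash like average-hash boils an image down to. NeuralHash is a neural network rather than this, but the same idea applies: only the coarse layout of bright and dark regions survives, so two images whose bright blobs land in roughly the same places end up with nearby hashes. A toy sketch, not Apple's algorithm:

    # Toy "average hash" (aHash): shrink to an 8x8 grayscale thumbnail and keep
    # one bit per cell - brighter than the mean or not. Everything human vision
    # uses to tell the images apart is discarded.
    from PIL import Image
    import numpy as np

    def average_hash(path, grid=8):
        img = Image.open(path).convert("L").resize((grid, grid))
        pixels = np.asarray(img, dtype=np.float32)
        bits = (pixels > pixels.mean()).flatten()
        return int("".join("1" if b else "0" for b in bits), 2)

    def hamming(a, b):
        return bin(a ^ b).count("1")

    # e.g. hamming(average_hash("dog.png"), average_hash("gray_blob.png"))
    # can be surprisingly small even though the images look nothing alike.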
by aliabd on 8/25/21, 7:46 PM
by DavideNL on 8/25/21, 8:07 PM
> For example, it's possible to detect political campaign posters or similar images on users' devices by extending the database.
So who controls the database?
by seanbarry on 8/25/21, 8:03 PM
Can somebody please explain to me how one can go about finding images whose hashes collide? Or how you can craft an image to have a specific hash?
by spuz on 8/25/21, 7:10 PM
Apple have stated that they will make the database of hashes that their system uses auditable by researchers. Does anyone know if that has happened yet? Is it possible to view the database and if so, in what form? Can the actual hashes be extracted? If so then that would obviously open up the kind of attack described in the article. Otherwise, it would be interesting to know how Apple expects the database to be auditable without revealing the hashes themselves.
by nonbirithm on 8/25/21, 7:46 PM
Irrespective of whether or not NeuralHash is flawed, should Apple scan user data or should they not?
If not, what is going to convince them to stop at this point?
I believe that they should scan user data in some capacity, because this is about data that causes harm to children.
However, I believe that they should not run the scan on the device, because that carries significant drawbacks for personal privacy.
by 1vuio0pswjnm7 on 8/25/21, 7:39 PM
[deleted]
by slg on 8/25/21, 7:31 PM
Now let's create one for the hash matching that Google, Microsoft, and other cloud providers use.
If your problem with Apple's proposal is the fact that they do hash matching (rather than the fact that the system runs on your device), why is the criticism reserved for Apple instead of being directed at everyone who does hash matching to find CSAM? It seems like a lot of the backlash comes from Apple being open and honest about this process. I worry that this will teach companies to hide this type of functionality in the future.