by hiddencache on 9/6/21, 10:33 AM with 52 comments
by ChrisMarshallNY on 9/6/21, 12:40 PM
Good article. One thing about fakes is that, in many cases, they don't need to be super-high-quality. They just need to be good enough to reinforce a narrative for a receptive audience.
An example is that Kerry/Fonda fake. Just looking at it as a thumbnail on my phone, it was easy to see that it was a composite. Also, I have seen both photos in their original contexts. They are actually fairly well-known images in their own right.
That didn't stop a whole lot of folks from thinking it was real. They were already primed.
The comment below, about using an AI "iterative tuner," is probably spot-on. It's only a matter of time before fake photos, videos, and audio are par for the course.
by tinus_hn on 9/6/21, 5:43 PM
These days you don’t even need to fake the photo, you can just attach the fake drama to a photo of something else and no one will bat an eyelid.
by z5h on 9/6/21, 2:43 PM
This was after photographers seemed to not believe this was the case https://photo.stackexchange.com/q/86550/45128
In any case, detecting cropped photos could be a way to tell that something has been intentionally omitted after the fact.
by open-source-ux on 9/6/21, 6:34 PM
A mundane example: You're browsing a property website, look through the pictures, and then visit a property only to discover the rooms are tiny matchbox-sized spaces. They looked so much more spacious when you viewed them online. You've just discovered wide-angle photography for real estate - it purposely distorts perspective to make a space look more spacious.
A 'fake' news example: During the coronavirus lockdown, a Danish photo agency, Ritzau Scanpix, commissioned two photographers to shoot the same scenes of people in socially-distanced scenarios from two different perspectives. Were people observing the rules? Or did the type of lens (wide-angle vs. telephoto) give an intentionally misleading impression?
The pictures are here - the article is in Danish, but the photos tell the story:
https://nyheder.tv2.dk/samfund/2020-04-26-hvor-taet-er-folk-...
by kkielhofner on 9/6/21, 1:53 PM
There are virtually endless ways to generate ("deepfake") or otherwise modify media. I'm convinced that we're (at most) a couple of software and hardware advancements away from anyone being able to generate or modify media to the point where it's undetectable (certainly by average media consumers).
This comes up so often on HN I'm beginning to feel like a shill, but about six months ago I started working on a cryptographic approach to 100% secure media authentication, verification, and provenance with my latest startup Tovera[0].
With traditional approaches (SHA-256 checksums) plus the addition of blockchain (for truly immutable, third-party verification) we have an approach[1] that I'm confident can solve this issue.
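The "traditional approach" the comment mentions boils down to publishing a digest of the media bytes and re-checking it later. A minimal sketch (the function names here are illustrative, not Tovera's API):

```python
# Minimal sketch of SHA-256-based media verification: publish the digest
# of the original bytes, then re-hash and compare to detect any change.
import hashlib

def sha256_digest(data: bytes) -> str:
    """Return the hex SHA-256 digest of raw media bytes."""
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, published_digest: str) -> bool:
    """True only if the media bytes match the previously published digest."""
    return sha256_digest(data) == published_digest

original = b"original media bytes"
digest = sha256_digest(original)
assert verify(original, digest)             # untouched media verifies
assert not verify(b"edited bytes", digest)  # any modification is detected
```

This only proves the bytes are unchanged since the digest was published; anchoring that digest somewhere immutable (the blockchain part) is what adds third-party-checkable provenance.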
by thesz on 9/7/21, 5:16 PM
[1] https://docs.opencv.org/master/dc/dbb/tutorial_py_calibratio...
Aligning points on a photo outside of the more-or-less linear center region will certainly result in crossing lines - which is what we see in the alignment attempt in the article: the points being aligned are both close to the center and close to the edge (where distortion is greatest).
There is no mention of lens distortion anywhere in the article.
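The effect the comment describes follows from the standard radial distortion model used in camera calibration (the Brown-Conrady model behind the OpenCV tutorial linked above). A small pure-Python sketch, with made-up coefficients, showing that displacement grows rapidly away from the image center:

```python
# Sketch of radial lens distortion (Brown-Conrady model): a point's
# displacement scales with powers of its distance r from the optical
# center, so alignment near the center looks fine while points near
# the edge shift noticeably. Coefficients here are illustrative.
def distort(x: float, y: float, k1: float, k2: float = 0.0):
    """Map an ideal normalized point to its radially distorted position."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor

k1 = -0.2  # typical sign for barrel distortion (hypothetical magnitude)
cx, _ = distort(0.01, 0.01, k1)  # near the center: almost no shift
ex, _ = distort(0.9, 0.9, k1)    # near the edge: large shift
print(abs(cx - 0.01), abs(ex - 0.9))
```

With these numbers the center point moves by well under a thousandth of a unit while the edge point moves by roughly a third of a unit, which is why straight lines fitted through both regions end up crossing.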
But some other points are interesting to think about.
by dang on 9/6/21, 2:20 PM
Signs that can reveal a fake photo - https://news.ycombinator.com/item?id=14670670 - June 2017 (18 comments)