by DalasNoin on 1/5/25, 1:40 PM with 112 comments
by 101008 on 1/5/25, 8:58 PM
It came from verify@verification.metamail.com, with alert@nofraud.com cc'd. All red flags for phishing.
I googled it because it had all the purchase information, so unless a malicious actor had infiltrated Meta's servers, it had to be legitimate. And it was, after some googling. But why do they do such things? I would expect better from Meta.
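One cheap signal (not proof either way, since a phisher who controls a domain can set these up too) is whether the sending domains even publish SPF and DMARC records. A minimal sketch, assuming the dnspython library and using the two domains from the email above:

  import dns.resolver

  def has_txt(name, marker):
      # True if any TXT record at `name` starts with `marker`
      try:
          answers = dns.resolver.resolve(name, "TXT")
      except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
          return False
      return any(
          b"".join(r.strings).decode(errors="replace").startswith(marker)
          for r in answers
      )

  for domain in ("verification.metamail.com", "nofraud.com"):
      spf = has_txt(domain, "v=spf1")                  # SPF: TXT on the domain itself
      dmarc = has_txt("_dmarc." + domain, "v=DMARC1")  # DMARC: TXT under _dmarc.
      print(domain, "SPF:", spf, "DMARC:", dmarc)

Absence of either record would be one more red flag; presence only tells you the domain's owner set up mail authentication, not who that owner is.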
by joe_the_user on 1/5/25, 7:57 PM
The situation involves institutions that are happy to send opaque links by email as part of their workflow. What could change this? All I can imagine is state regulation, but that also seems implausible.
by hibikir on 1/5/25, 8:23 PM
AI now means far less skilled people can be as good as she was. Karla as a Service. We are doomed.
by LeftHandPath on 1/5/25, 8:04 PM
I have to wonder if, in the near future, we're going to have a much higher perceived cost for online social media usage. Problems we're already seeing:
- AI turning clothed photos into fake nudes [0]
- AI mimicking a person's voice, given enough reference material [1]
- Scammers impersonating software engineers in job interviews, after viewing their LinkedIn or GitHub profiles [2]
- Fraudsters using hacked GitHub accounts to trick other developers into downloading/cloning malicious arbitrary code [3]
- AI training on publicly-available text, photo, and video, to the surprise of content creators (but arguably fair use) [4]
- AI spamming GitHub issues to try to claim bug bounties [5]
All of this probably sounds like a "well, duh" to some of the more privacy- and security-savvy here, but I still think it has produced a notable shift away from the tech-optimism that ran from roughly 2012 to 2018. These problems all existed then, too, but with less frequency. Now it's a full-pressure firehose.
[0]: https://www.wsj.com/politics/policy/teen-deepfake-ai-nudes-b...
[1]: https://www.fcc.gov/consumers/guides/deep-fake-audio-and-vid...
[2]: https://connortumbleson.com/2022/09/19/someone-is-pretending...
[3]: https://it.ucsf.edu/aug-2023-impersonation-attacks-target-gi...
[4]: https://creativecommons.org/2023/02/17/fair-use-training-gen...
[5]: https://daniel.haxx.se/blog/2024/01/02/the-i-in-llm-stands-f...
by serviceberry on 1/5/25, 9:19 PM
If I receive a unique / targeted phishing email, I sure will check it out to understand what's going on and what they're after. That doesn't necessarily mean I'm falling for the actual scam.
by terribleperson on 1/5/25, 7:44 PM
Social engineering (and I include spearphishing in that) has always been powerful and hard to mitigate. Now it can be done automatically, at low cost.
by cluckindan on 1/5/25, 7:45 PM
If it was done without the target's consent, it would certainly be unethical.
by ddmf on 1/6/25, 3:21 PM
employer.git.pension-details.vercell.app
Why do these companies make this stuff so hard!?
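Part of why it's hard: everything to the left of the registrable domain is freely chosen by whoever owns that domain, so the "employer" and "pension-details" labels mean nothing. A quick sketch, assuming the tldextract library, with the hostname quoted above:

  import tldextract  # resolves against the public suffix list

  ext = tldextract.extract("employer.git.pension-details.vercell.app")
  print(ext.subdomain)          # employer.git.pension-details (cosmetic, owner-chosen)
  print(ext.registered_domain)  # vercell.app (the only part tied to a registrant)

The only question that matters is whether vercell.app is a domain the pension provider actually controls, and nothing in the hostname answers it.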
by webdevladder on 1/5/25, 10:01 PM
- 3 new email chains from different sources within a couple of weeks, all similar inquiries asking whether I was interested in work (I wasn't at the time, and I receive these very rarely)
- escalating specificity: all referenced my online presence, and the third hit my interests so squarely that I was still thinking about it a month later
- only the third acknowledged my polite decline
- a month later, the third sender's email address and website were offline
- the inquiries were quite restrained: no links, just the question of whether I was interested, followed by a terse reply that left the door open after I declined
I have no idea what's authentic online anymore, and I think it's dangerous to operate your online life believing you can discern malicious written communications with any certainty, absent very strong signals like known domains. Even real-time video content is going to be a problem eventually.
I suppose we'll continue to see VPN sponsorships prop up a disproportionate share of the creator economy.
In other news, Google routed my mom to a misleading passport-renewal service. She didn't know to look for .gov. Oh well.
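The check she'd have needed is tiny, which makes it all the more frustrating. A sketch, using hypothetical URLs, that trusts only hostnames under the .gov TLD:

  from urllib.parse import urlparse

  def is_gov(url):
      # True only when the hostname itself sits in the .gov TLD,
      # not when "gov" merely appears somewhere in the name
      host = urlparse(url).hostname or ""
      return host.endswith(".gov")

  print(is_gov("https://travel.state.gov/renew"))            # True
  print(is_gov("https://us-passport-gov.example.com/renew")) # False

But of course the whole problem is that the people who most need the check don't know it exists.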
by nostradumbasp on 1/5/25, 11:21 PM
That's where we're headed. Bad actors paying for DDoS attacks is more or less mainstream these days. Meanwhile, the success rate for phishing attacks is incredibly high and the damage is often immense.
I wonder what the price for AI-targeted phishing attacks would be: automated voice-impersonation attempts at social engineering, smishing, emails pretending to be from customers, partners, etc. I bet it could be very lucrative. I could imagine a motivated high-schooler pulling off each of those sorts of "services" in a country with lax enough laws. Couple those with traditional and modern attack vectors and, wow, it could get really interesting.