from Hacker News

Ask HN: Strategies against AI voice and video scams?

by asar on 10/3/24, 10:19 AM with 1 comment

With the advancements in generative AI, especially for voice and video, I've been wondering for a while how to protect effectively against scams. For now I feel like I can personally still tell when a video or audio clip is generated or fake, but I'm increasingly worried that as these tools develop it will become impossible to identify fakes.

What I'm currently thinking is to establish a code word with my family, to at least protect against the scenario where a caller claims to be me (it's so easy to train a voice on recordings these days). I was wondering if the HN community can think of other ways to protect against this threat?
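
(For what it's worth, a family code word is essentially a pre-shared secret, and its weakness is that anyone who overhears it once can replay it on a later call. A minimal sketch of a replay-resistant variant, purely as an illustration with hypothetical names; in real life this would just be a fresh question-and-answer spoken over the phone rather than software:)

    import hmac
    import hashlib
    import secrets

    # Hypothetical sketch: the code word acts as a pre-shared secret.
    # With a challenge-response check the secret itself is never spoken,
    # so an eavesdropping scammer cannot capture and replay it.

    SHARED_SECRET = b"our-family-code-word"  # agreed in person, never over the phone

    def make_challenge() -> bytes:
        # A fresh random challenge per call prevents replay of old responses.
        return secrets.token_bytes(16)

    def respond(challenge: bytes, secret: bytes) -> str:
        # The caller proves knowledge of the secret without revealing it.
        return hmac.new(secret, challenge, hashlib.sha256).hexdigest()

    def verify(challenge: bytes, response: str, secret: bytes) -> bool:
        expected = respond(challenge, secret)
        # Constant-time comparison avoids leaking information via timing.
        return hmac.compare_digest(expected, response)

    # Usage: the callee issues a challenge, the caller answers with the HMAC.
    challenge = make_challenge()
    assert verify(challenge, respond(challenge, SHARED_SECRET), SHARED_SECRET)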

Looking at OpenAI's recent realtime voice release, and combining it with diffusion models for video, I feel the opportunities for scammers are becoming endless, and I'm deeply worried that there are no real protections at this point.

  • by gus_massa on 10/4/24, 2:56 AM

    The problem is when they call at 3am in tears. The voice and story don't need to be accurate to be convincing.