by sounds on 9/22/24, 3:49 AM with 446 comments
by ryzvonusef on 9/22/24, 8:26 AM
My country already has blasphemy lynching mobs based on the slightest perceived insult, real or imagined. They will mob you, lynch you, burn your corpse, then distribute sweets while your family hides and issues video messages denouncing you and forgiving the mob.
And this was before AI was easy to access. You can say a lot of things about 'oh, backward countries', but this will not stay there; it will spread. You can't just give a toddler a knife and then blame them for stabbing someone.
This has nothing to do with fame, with security, with copyright. This will get people killed. And we have no tools to control it.
https://x.com/search?q=blasphemy
I fear the future.
by ummonk on 9/22/24, 4:01 AM
by wwweston on 9/22/24, 4:03 AM
by adityaathalye on 9/22/24, 7:00 AM
One can't help but wonder what theft even means anymore when it comes to digital information. With the (lack of) legal precedent, it feels like the wild wild west of intellectual property and copyright law.
Like, if even a superstar like Scarlett Johansson can only write a pained letter about OpenAI's hustle to mimic her "Her" persona, what can the comparatively garden-variety niche nerd do?
Like Geerling, feel equally sad, angry, and frustrated, but merely say, "Please, for the love of all that is good, be nice and follow an honour code."
by cranium on 9/22/24, 8:09 AM
by donatj on 9/22/24, 4:20 AM
by ei23 on 9/22/24, 12:59 PM
by XorNot on 9/22/24, 4:41 AM
In about 5 years AI voices will be bespoke and more pleasant to listen to than any real human: they're not limited by vocal cord stress, can be altered at will, and can easily be calibrated by surveying user engagement.
Subtly tweaking voice output and monitoring engagement is going to be the way forward.
by surfingdino on 9/22/24, 6:23 AM
by thih9 on 9/22/24, 6:08 AM
Same as what happens with unauthorized use of someone's images. And platforms and their moderation teams have processes in place to report and remove that. Looks like we need something similar for voice.
by singleshot_ on 9/22/24, 3:35 PM
You can absolutely positively find a free lawyer if your issue is interesting enough.
This is the most interesting issue of our day.
by vonnik on 9/22/24, 12:03 PM
https://techcrunch.com/2024/09/19/here-is-whats-illegal-unde...
Not sure if those laws apply to Jeff tho, as they concern porn, politics and employer contracts.
by benterix on 9/22/24, 10:29 AM
by GaggiX on 9/22/24, 4:06 AM
Make a video, say what you think, get views, and probably put more pressure on Elecrow to respond.
by mediumsmart on 9/22/24, 4:49 AM
by cityzen on 9/22/24, 3:52 PM
Since that guy was CEO of Google it’s all good right???
https://www.theverge.com/2024/8/14/24220658/google-eric-schm...
by at_a_remove on 9/22/24, 5:16 AM
It looks like we're heading in that direction.
by paganel on 9/22/24, 9:08 AM
[1] https://old.reddit.com/r/redscarepod/comments/1fmiiwt/which_...
by rldjbpin on 9/23/24, 8:35 AM
IANAL and not sure about regional precedent on these topics, but there are plenty of ads where lookalikes or voice actors are used to imitate someone's likeness. they are mostly satire, but there has yet to be a case where there was litigation over this or where prior approval was needed.
we already have ai-based voice abuse in the political sphere, and while only one country has legislation banning the use in voice calls (https://news.ycombinator.com/item?id=39304736), another country actively used the same underlying tech to aid its own rallies (https://news.ycombinator.com/item?id=40532157).
the tools are here to stay, but what counts as fair use needs to be defined more than ever.
by sandreas on 9/22/24, 9:13 AM
Although it was not too hard to create, making it even easier is something I don't want to help achieve...
I hate to say this, but ruining a narrator's existence with AI seems to get easier every day.
by segmondy on 9/22/24, 3:19 PM
by LegitShady on 9/22/24, 6:22 AM
by Corrado on 9/23/24, 7:05 AM
by eth0up on 9/22/24, 11:47 AM
She has/had two numbers: MagicJack and Google. When I tried to call her, the MagicJack was no longer in service and the Google one said something about "unavailable".
I reached out to my cousin (my aunt's daughter) to inquire. I was told her number (and perhaps other things) had been "hacked", whatever that means. She had recently broken her hip and was in a hospital recovering.
With this on my mind, I received a call (from the Google number), strangely, while processing files with GPT. My skepticism was primed and ready, possibly making me paranoid. However, I did my due diligence and asked dozens of questions, mostly boring things that she typically wouldn't have patience for. Sometimes she'd reply with a reasonable answer and sometimes not, which made it difficult to evaluate. Toward the end, I asked where she was. She said, with an awkward tempo, "I'm at home, in Cuenca", which I found odd because she'd normally just say she was at home, period. I then pressed her to tell me where she was before she returned home. She said she didn't understand. I rephrased the question, stating that it was a simple inquiry, e.g., "where were you before going home?" She said "this is getting too strange and confusing" and killed the call.
I notified my cousin, telling her I thought something was suspicious, still cognizant of all the characteristics one would expect from a 90 year old recovering from a serious injury. My cousin might, technology wise, be in AOL territory.
About 5 days later, I received a call from my aunt, on the Google line. This time, I was more passive and cautious, but again asked dozens of boring questions to probe the situation. I was surprised by both her ability to answer certain questions and her inability to answer others. I tried to ask questions on topics we'd never discussed, in case the line had been tapped for a long time and references had been established by an imposter. I had begun to suspect I was just being paranoid. But several things kept bothering me: 1) typing noises in the background, 2) Shatneresque pauses before nearly every reply, 3) refusal to answer some specific questions.
At the end of our apparent conversation, I asked her to do a very serious favor for me: send me a selfie, with one hand making the thumbs-up gesture. She replied, "I'll send you a photo of my passport." I replied, "That's stupid, ridiculous, and serves no purpose. Don't do that. Understand? Do NOT send me a passport photo. I'm asking you something very important. Do exactly what I asked. Will you do this?" Her reply: "Yes. What is your email address?" This was odd. I told her she already knew it and that it was the same one she'd had for years. She asked that I tell her anyway. Ok, 90 years old, traumatic injury, possible prescription drugs... "It's my full name @ xyzmail com". We killed the call.
I immediately called my cousin and told her of my suspicions, including some of my aunt's babbling about all her finances and accounts being inaccessible. She said that was strange, because she had just deposited 8k into her account. Meanwhile, a notification appears on the phone: an email from my aunt. It's a photo of her passport.
Having no authority in this situation, but plenty well annoyed, I immediately jumped on a real computer and ran the photo through exiftool. The photograph was taken in 2023, and it was now August of 2024. I then grabbed the geo coordinates (cryptically presented by exiftool) and, with some effort, geolocated the image to right on top of her former residence in Cuenca.
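For anyone who wants to run the same check, here is a minimal sketch of pulling the capture date and GPS position out of a photo's metadata, assuming the exiftool CLI is installed; the filename "passport.jpg" is just a made-up placeholder for the emailed photo.

    # Minimal sketch: read capture date and GPS position from a photo's EXIF data.
    # Assumes the exiftool CLI is installed; "passport.jpg" is a hypothetical filename.
    import json
    import subprocess

    result = subprocess.run(
        ["exiftool", "-json", "-n",  # -n prints GPS as plain decimal degrees
         "-DateTimeOriginal", "-GPSLatitude", "-GPSLongitude",
         "passport.jpg"],
        capture_output=True, text=True, check=True,
    )
    meta = json.loads(result.stdout)[0]

    print("Taken:", meta.get("DateTimeOriginal"))
    print("GPS:", meta.get("GPSLatitude"), meta.get("GPSLongitude"))
    # Paste the decimal coordinates into any map service to geolocate the shot.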
I still don't know WTF is going on, and my cousin thinks I'm a dingbat. But what I know for sure is that this is an age where such things are plausible enough and will soon be inevitable. The way I think may be deranged, but I truly don't even know if my aunt still exists. Yet I can have a pretty compelling conversation, either with her or with something strongly resembling her, minus the Shatneresque pauses, typing noises and selective amnesia.
by t0bia_s on 9/22/24, 6:42 PM
Regulating it would prolong adoption and take resources.
by djoldman on 9/22/24, 12:42 PM
I don't have a dog in this fight, but just to be clear, OpenAI has stated that they paid a voice actor to create the voice ("Sky") that sounds like Scarlett Johansson. There was no "cloning" or "stealing" (or so they say).
https://openai.com/index/how-the-voices-for-chatgpt-were-cho...
by 4ndrewl on 9/22/24, 8:59 AM
by veunes on 9/22/24, 3:52 PM
by rishikeshs on 9/22/24, 9:28 AM
Is that some sort of a coat of arms?
by oehpr on 9/22/24, 9:15 PM
1. Why clone Jeff's voice?
When I was messing with Stable Diffusion using Automatic1111's interface, I noticed it came with a big list of artists to add to the prompt to stylize the image in some way. There was a big row in the media about AI art reproducing artists' work, and many artists came forward feeling it was a personal attack. But... I mean, the truth is more general than that. When I pressed a button to insert a random name into a prompt, my goal was not "yes, give me this person's art for free"; it was "style this somehow".
I wasn't personally interested in any particular artist; I honestly would have preferred a bunch of sliders.
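As a rough illustration of that "style knob", here is a minimal sketch using the Hugging Face diffusers library rather than Automatic1111's UI; the model ID, base prompt, and style strings are illustrative placeholders, not anything from the original post.

    # Sketch of prompt-based styling with diffusers (not Automatic1111's UI).
    # The model ID, base prompt, and style suffixes are illustrative placeholders.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    base_prompt = "a lighthouse on a cliff at dusk"
    styles = ["", "in the style of a famous impressionist", "as a rough charcoal sketch"]

    # Appending a style phrase (often an artist's name) acts as a coarse style knob;
    # the intent is usually "style this somehow", not that specific artist's work.
    for i, style in enumerate(styles):
        prompt = f"{base_prompt}, {style}" if style else base_prompt
        pipe(prompt).images[0].save(f"lighthouse_{i}.png")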
Jeff here is clearly a good speaker. That's a practiced talent and voice actors exist because it's hard. Elecrow wanted a voice over and they wanted it to be as good as they could make it. Jeff is very good. So did they want Jeff?
I think what they really wanted was a good and cogent narration with the tenor of a person. Not a machine making noises that sound like english. If they had an easy way to get that, we wouldn't be talking about it here.
2. What function does copyright serve?
Well. I think a reasonable argument would be that if people were able to reproduce your work for free, you would quickly find yourself without a monetary incentive to make more of it.
So. What happens if you combine answer 1 with answer 2?
I think it leads to: "We should consider making it illegal to automatically reproduce the work of an artisan", you know, the Luddite argument. An argument that has been perceived to be, more or less, settled.
So it seems to me that, for individuals, harms matter, and for society, they don't.
by moffkalast on 9/22/24, 9:23 AM
Most likely all existing YouTubers will have complete voice and video digital clones made of them. Then you can also tune an LLM on their scripts and it'll respond in the same character as well.
In theory you could also bring back ones who are dead, which would be very interesting in a historical sense. Like, if we had hundreds of hours of Napoleon talking in front of a camera, it would be trivial to recreate a digital version of him for anthropological study, maybe even having various figures debate things with each other. That's what historians a century from now, after we all die, will be able to do with impunity.
by swag314 on 9/22/24, 8:00 AM
by m3kw9 on 9/23/24, 6:54 PM
by znpy on 9/22/24, 1:43 PM
We already had fake news and organizations willingly spreading it.
We had clearly fake pictures and people believing them.
Flat-earthers, anti-vaxxers and whatever.
This is just another brick in the wall.
by gyudin on 9/22/24, 6:44 AM
by golol on 9/22/24, 6:49 AM
There is absolutely zero evidence for this. I find it infuriating that this keeps being stated as fact. So they go and hire a voice actor and clearly use her voice to train, but then they also scrape Scarlett Johansson from YouTube and splice it into the training data to make the voice a bit more like hers? Really, does that sound realistic?
by scotty79 on 9/22/24, 7:40 AM
Except that never happened; the voice belonged to a completely different voice actress, and Scarlett Johansson had exactly zero right to prevent that person from making money as a voice actress by lending her voice to AI.
These complaints remind me a little of the story of a man who complained that his photo was used to illustrate an article about how all hipsters look the same, and it eventually turned out it wasn't his photo.
by cr3cr3 on 9/22/24, 6:25 AM
by carmackfan on 9/22/24, 4:37 AM
by meiraleal on 9/22/24, 4:06 AM