by mekpro on 5/5/25, 11:16 AM with 341 comments
by ndiddy on 5/5/25, 1:26 PM
> At times, it sounded like the case was the authors’ to lose, with [Judge] Chhabria noting that Meta was “destined to fail” if the plaintiffs could prove that Meta’s tools created similar works that cratered how much money they could make from their work. But Chhabria also stressed that he was unconvinced the authors would be able to show the necessary evidence. When he turned to the authors’ legal team, led by high-profile attorney David Boies, Chhabria repeatedly asked whether the plaintiffs could actually substantiate accusations that Meta’s AI tools were likely to hurt their commercial prospects. “It seems like you’re asking me to speculate that the market for Sarah Silverman’s memoir will be affected,” he told Boies. “It’s not obvious to me that is the case.”
> When defendants invoke the fair use doctrine, the burden of proof shifts to them to demonstrate that their use of copyrighted works is legal. Boies stressed this point during the hearing, but Chhabria remained skeptical that the authors’ legal team would be able to successfully argue that Meta could plausibly crater their sales. He also appeared lukewarm about whether Meta’s decision to download books from places like LibGen was as central to the fair use issue as the plaintiffs argued it was. “It seems kind of messed up,” he said. “The question, as the courts tell us over and over again, is not whether something is messed up but whether it’s copyright infringement.”
by Workaccount2 on 5/5/25, 2:37 PM
1. Training AI on freely available copyrighted material - Legality is ambiguous and largely untested in court. AI doesn't directly copy the material it trains on, so the question isn't an easy one to rule on.
2. Circumventing payment to obtain copyright material for training - Unambiguously illegal.
Meta is charged with doing the latter, but it seems the plaintiffs want to also tie in the former.
by codedokode on 5/5/25, 5:08 PM
Also, I read that ordinary folks have been arrested for filming in a cinema even when they never redistributed the video (they were arrested before they could). Again, it seems unfair that they get arrested and Zuckerberg doesn't.
by ineedasername on 5/5/25, 9:46 PM
When the scale is a significant portion of all human text output ever, I don't think we're in the realm of any prior model. This is now something closer to how society attempts to approach natural resources like land, frequency bands, utility right-of-way, etc. I think this is the direction that laws and legislation should look to go. Or maybe not, I don't claim to have the answer, only that existing models are inadequate.
by PeterStuer on 5/6/25, 8:25 AM
However, claiming llama is not a 'substantial transformation' of the information used to build it seems untenable.
The complaint feels to me more like a paint factory claiming rights to the paintings you created with its paint than like a classic pirate DVD copier that just resells copies.
Maybe a middle ground could be a Google Books-like solution, where you can still find anything but the output is restricted to substantial fragments rather than complete verbatim chapters?
I do not believe people use llama to 'read published books on the cheap'.
by TrnsltLife on 5/5/25, 4:29 PM
Will the neural network (LLM) itself become illegal? Will its outputs be deemed illegal?
If so, do humans who have read an illegally downloaded book become illegal? Do their creative outputs become illegal?
by jayd16 on 5/5/25, 4:10 PM
What is the substantive difference between training a model locally using these works that are presumably pulled in from some database somewhere and Napster, for example?
Would a p2p network for sharing of copyrighted works be legal if the result is to train a model? What if I promise the model can't reproduce the works verbatim?
by RajT88 on 5/5/25, 1:43 PM
I have this debate with a friend of mine. He's terrified of AI making all of our jobs obsolete. He's a brilliant musician and programmer both, so he's both enthused and scared. So let's go with the Swift example they use.
Performance artists have always tried to cultivate an image, an ideal, a mythos around their character(s). I've observed that as the music biz has gotten tougher, the practice of selling merch at shows has ramped up. Social media is monetized now. There's been a big diversification in the effort to make money from everything surrounding the music itself. So too will it be for artists in the age of AI.
You're starting to see this already: artists who got big not necessarily because of the music, but because of the weird cult of personality they built. One who comes to mind is Poppy, who ironically enough built a cult of personality around being a fake AI bot...
https://en.wikipedia.org/wiki/Poppy_(singer)
You've definitely got counter-examples like Hatsune Miku - but the novelty of Miku was because of the artificiality (within a culture that, like, really loves robots and shit). AI pop stars will undoubtedly produce listenable records and make some money, but I don't expect that they will be able to replace the experience of fans looking for a connection with an artist. Watch the opening of a Taylor Swift concert, and you'll probably get it.
by Mbwagava on 5/5/25, 1:37 PM
It's going to take centuries to undo the damage wrought by IP-supported private enterprise. And now we also have to put up with fucking chatbots. This is the worst timeline.