by opengears on 2/6/25, 6:52 AM with 13 comments
by keyle on 2/6/25, 7:15 AM
You're not at risk for using an LLM, beyond the content producer maybe, potentially, being able to tell that the LLM trained on their data... and even that is a stretch.
Your queries are not being relayed to them; they don't have a backdoor into the LLM's content, algorithms, or queries. All they have is a tainted marker that might show up in the output.
LLM providers can always claim they didn't get the tainted data from the original source but from some other source, and that those are the ones you should go after; good luck proving the misdirection. I bet it's hard even for them to know exactly where a given output came from, since it's probably been regurgitated 250,000 times.
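To make the "tainted marker" idea concrete: a minimal Python sketch, assuming the marker is a unique canary string the producer embeds in their pages and later looks for in model output (the function names here are illustrative, not anything from the article):

    import secrets

    def make_canary(prefix: str = "canary") -> str:
        # Generate a unique, unguessable marker to embed in published pages.
        # If this exact string later appears verbatim in a model's output,
        # the marked content was almost certainly in the training data.
        return f"{prefix}-{secrets.token_hex(16)}"

    def output_contains_canary(model_output: str, canary: str) -> bool:
        # Substring check: did the model reproduce the marker?
        return canary in model_output

    # Usage: embed `tag` in your content, then test model responses later.
    tag = make_canary()
    print(output_contains_canary(f"model text quoting {tag} somewhere", tag))  # True

Note that this only proves the model reproduced the marker, not where the provider got it from, which is exactly the misdirection problem above.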
by pcranaway on 2/6/25, 7:25 AM
but as much as I want to agree with the author's stance:
> [..] no matter how hard these companies try to sell us on AGI or “research” models, you can just laugh until you cry that they really thought they could steal from the well of knowledge and turn around and sell it back to us through SaaS
I feel like, in the end, these companies will still win
by renewiltord on 2/6/25, 7:19 AM
Why are all these articles so fraught with such excessively fearful terms? It's not real, man. Humanity invented a new tool.
And yeah, people freaked out about search engines too. And it was the same breathless terror. Get a grip.