by robinduckett on 7/1/23, 8:42 AM with 4 comments
It seems to me that these sorts of things are happening because the social media sites would like the AI companies to pay them for access to their vast amounts of unlabelled training data.
I feel like this is a real detriment to the users of the platform, and that locking things up like this is a risk. How do they calculate that the risk of ruining their platform is worth the potential reward of an AI company chucking them some cash to access content we submitted to them for free?
by skilled on 7/1/23, 8:50 AM
The idea was there before ChatGPT came along; it just happens to be the perfect trojan horse to get the ball rolling and let people think the motive is the one and not the other.
by gmerc on 7/1/23, 2:26 PM
by eimrine on 7/1/23, 9:13 AM
One of my favourite social media sites is [1]: a colossal Web1 forum dedicated to a super-narrow topic, with none of the usual bullshit — no apps, no phone number verification, no scummy advertising, no artificial limits on which devices can access the content. How can so-called Artificial Intelligence harm [1]? How can it harm HN?