by antiviral on 11/27/23, 4:39 PM with 157 comments
by avalys on 11/27/23, 5:44 PM
Do carmakers “capitalize on vulnerability” when they advertise pickup trucks as big tough vehicles for tough, outdoorsy men?
Do providers of health insurance for pets “capitalize on vulnerability” when they say you need to buy their product if you love your pet?
At some point people need to be responsible for their own decisions. And I can’t get that worked up about Meta’s free product.
by Curvature5868 on 11/27/23, 7:22 PM
Previously, children's exposure to marketing and propaganda was mostly confined to their entertainment hours, during which they watched television or read magazines. There was at least some hope for moderation. However, "apps" have blurred these boundaries, as the same devices used for education and social interaction are also channels for persistent advertising and messaging, making it harder to limit exposure to just "entertainment" time.
by splitwheel on 11/27/23, 6:30 PM
For teen girls, the apps are designed to scare them about being socially excluded. For teen boys, the apps are designed to fill their need to master skills.
The issues the government has to deal with around app addiction are self-harm attempts by girls (e.g., emergency room visits) and underperformance by boys in the real world (e.g., low college enrollment).
If you are trying to make an addictive app, this is a good reference to understand the science: https://www.amazon.com/Hooked-How-Build-Habit-Forming-Produc...
BJ Fogg is a good reference too: https://www.bjfogg.com
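For the curious, the loop that book describes is trigger -> action -> variable reward -> investment. Here is a minimal Python sketch of that loop; all names, messages, and probabilities are hypothetical illustrations, not any real app's code:

    import random

    # Hypothetical sketch of the Hook Model loop from Eyal's "Hooked":
    # trigger -> action -> variable reward -> investment.

    def external_trigger(user):
        # A notification engineered around fear of social exclusion.
        return f"{random.choice(user['friends'])} mentioned you in a post"

    def variable_reward():
        # Intermittent reinforcement: the unpredictability of the payoff
        # is what makes the loop habit-forming.
        roll = random.random()
        if roll < 0.1:
            return "10 new likes on your photo"
        if roll < 0.4:
            return "1 new follower"
        return None  # often nothing, so the user keeps scrolling

    def invest(user, minutes):
        # Stored value (posts, follows, streaks) makes the next trigger
        # more effective, closing the loop.
        user["invested_minutes"] += minutes

    user = {"friends": ["alice", "bob"], "invested_minutes": 0}
    print(external_trigger(user))              # trigger
    print(variable_reward() or "nothing new")  # action + variable reward
    invest(user, minutes=12)                   # investment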
by ericra on 11/27/23, 7:32 PM
Isn't this just how all big tech companies operate as a normal business practice? Certainly YouTube is no better when it comes to targeting content and advertisements at children, to their detriment.
My main point is that I don't think it makes any difference whether Meta has some internal document proving that they specifically target children with these practices. The problem is so much bigger than a single policy or company, and legislatures need to figure out a better way to address the overarching problems. I don't have much faith that these one-off lawsuits will make that much of an impact given that they almost always lead to some fine or settlement that is an acceptable business loss for the company.
I'm all for Meta suffering death by a thousand cuts in the form of lawsuits from various levels of government, but at best it would just be replaced with something else unless more regulation exists at the top levels (US / EU / etc).
by siliconc0w on 11/27/23, 6:55 PM
I think a core class that should be taught is how to safely deserialize sensory input so as to avoid causing RCEs. Or basically 'patching' these known vulnerabilities.
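To make the metaphor concrete: in software, deserializing untrusted input with a format that can run code hands the attacker remote code execution, so you parse with a format that can only produce plain data. A minimal Python sketch (the payload here is just a placeholder):

    import json

    untrusted = b'{"msg": "hello"}'  # bytes from an untrusted source

    # Unsafe: pickle.loads() can instantiate arbitrary objects, so a
    # crafted payload can execute attacker-controlled code (an RCE).
    #     data = pickle.loads(untrusted)   # never on untrusted input

    # Safer: JSON parsing can only yield plain data (dicts, lists,
    # strings, numbers), so it cannot run code. Validate before use.
    try:
        data = json.loads(untrusted)
    except (json.JSONDecodeError, UnicodeDecodeError):
        data = None  # reject malformed input instead of trusting it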
by antiviral on 11/27/23, 4:40 PM
by crowcroft on 11/27/23, 6:18 PM
With broadcast media like TV, I can see what the programming is, and I can watch the same ads that are broadcast to every other house to know what's being shown to kids (and research companies do this). Similarly for retail media, I can go to a store and see what a retailer is doing.
For Meta, with AI newsfeeds and targeted ads, it's impossible to know exactly what any one person's experience is. I don't know the veracity of this specific case, but at a minimum I think there should be some legislation that forces these companies to be auditable in some way...
by NickC25 on 11/27/23, 5:40 PM
Above all else, since going public, Meta is in the business of making money. It's not illegal to target users' vulnerabilities in order to get them to spend more time or money on the platform. It's unethical as hell, but it's business 101: the shareholders would revolt if Zuck came out and said "here's this opportunity to make you all a ton of money, but we're placing our personal ethics above doing this, so we're not". He'd get sued for breach of fiduciary duty.
Now, are Meta's product strategies unethical (or questionably ethical), harmful to society, and setting bad precedent? Yeah, I'd agree with that. But the market and shareholders like money.
by zooq_ai on 11/27/23, 5:52 PM
This case is basically projecting everyone's misplaced hatred of social media without any proper controlled experiment on its benefits and harms to society.
You can't do controlled experiments on humans, and hence the states have no case beyond overreach. If they really want to cater to their constituents, they should pass specific laws.
by 1vuio0pswjnm7 on 11/27/23, 11:44 PM
https://ia800508.us.archive.org/12/items/gov.uscourts.cand.4...
Employee names are still redacted. Given Zuckerberg's views on privacy, one wonders why they should remain "anonymous".
by stanislavb on 11/27/23, 7:11 PM
by tomohawk on 11/27/23, 7:33 PM
https://nypost.com/2023/11/25/metro/jewish-teacher-hides-in-...
by extr on 11/27/23, 8:12 PM
by iteratethis on 11/27/23, 11:48 PM
How do you regulate legal but unethical? You can't. So let's make it illegal. But how?
Maximum notifications per day? Deep introspection of the actual content? Good and bad influencers? Curfews? It's impossible to codify this into law, unless you're China.
by ShamelessC on 11/27/23, 7:57 PM
by hnburnsy on 11/27/23, 10:13 PM
by nojvek on 11/28/23, 1:50 PM
With Twitter, even if I pay, I still get the same number of ads.
I want to customize what is shown in my feed.
by dmatech on 11/27/23, 6:32 PM
I suppose that the broader concern is over precisely what duties a company has to its customers. They obviously have the duty to be truthful when making offers, but every customer relationship will have an adversarial component where each party benefits at the other's expense (or at the expense of third parties). In cases like a bar serving alcohol to customers, there's usually some responsibility to prevent patrons from getting extremely intoxicated and getting in a car. But that case involves a clear signal that someone is dangerous. Facebook doesn't know if someone's grades are suffering or if they're having mental health issues. It doesn't know if it should tell the user to "touch grass".
by xkekjrktllss on 11/27/23, 5:55 PM
by haltist on 11/27/23, 6:50 PM