by adityamwagh on 5/29/24, 6:55 PM with 42 comments
by rdtsc on 5/29/24, 10:35 PM
That just sounds so strange. People will have to sign promises that their model won't cause harm through end-user misuse. It seems like they are setting up something for selective enforcement.
> The California government is gearing up for the “increased incarceration costs” and other costs associated with criminalizing AI development if this bill passes
That is just so odd. Weren't they the state focusing intently on quicker rehabilitation and alternatives to incarceration, but now they are preparing to incarcerate more people over AI models? What is even going on in that state?
What's the obvious loophole for the companies here? Leave the state? Or can they just say "This model was designed and built in Nevada/Kansas/Kentucky/etc. We (OpenAI) are just licensing it as an end-user from our partners in Kentucky, sorry, CA attorney general, /shrug/"?
by ApolloFortyNine on 5/29/24, 10:37 PM
The biggest points imo being
>The Frontier Model Division has a wide scope to define the “perjury” charges the bill attaches to developers for end-user misconduct, including “the rigor and detail” of the developer’s safety protocols
>So is any model trained with less compute but matching a 10^26 flops-model in “relevant benchmarks” (the bill doesn’t say which)
>So is any model trained with less compute AND lower benchmarks, so long as it is of “similar general capability” (we do not know what this means)
Actually legislating that the developer of the AI is liable for what end users do is absolutely wild. And the other clauses they added make this essentially apply to any AI a developer builds that can form a sentence. They made the categories so broad that an argument could be made that the law applies to any AI at all.
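To make that breadth concrete, here is a minimal, purely illustrative sketch of the covered-model test as read above. Only the 10^26 FLOPs threshold comes from the bill; the function name and the two boolean inputs are placeholders for the benchmark and "similar general capability" criteria the bill leaves undefined.

```python
# Hypothetical sketch of the "covered model" test as described above.
# The 10^26 FLOPs figure is from the bill; the other two criteria are
# left undefined there, so they appear here only as placeholder inputs.

COMPUTE_THRESHOLD_FLOPS = 1e26

def is_covered_model(training_flops: float,
                     matches_frontier_benchmarks: bool,
                     similar_general_capability: bool) -> bool:
    """Rough reading of which models the bill would cover."""
    if training_flops >= COMPUTE_THRESHOLD_FLOPS:
        return True   # trained with 10^26 FLOPs or more
    if matches_frontier_benchmarks:
        return True   # less compute, but matches a 10^26-FLOPs model on unspecified benchmarks
    if similar_general_capability:
        return True   # less compute AND lower benchmarks, but "similar general capability"
    return False
```

Even in this toy form, two of the three branches turn on terms the bill never defines, which is the point the quoted passages are making.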
by trhway on 5/29/24, 10:50 PM
by 65a on 5/30/24, 12:35 AM
by XEKEP on 5/29/24, 11:00 PM
Is this law even constitutional? Doesn't it violate the first amendment?
by gmuslera on 5/29/24, 11:28 PM
Using geography-limited legislation to govern things that are used countrywide or worldwide, whether this law or others from there or elsewhere, is a bit of an overreach. In the end, if a legislature democratically elected for a territory, representing its citizens, decides something, that is fine, as long as it only affects the people of that territory. Things go very wrong when your elected government imposes its opinion on other territories or countries; there should be a name for that.
by Spivak on 5/29/24, 10:33 PM
These models are really cool and novel, don't get me wrong, but they're essentially self-reading books built from their training data and ought to be regulated as such. But hey, if it means that some other state is gonna be the hub for AI, I'm all for it.
by foolswisdom on 5/29/24, 10:37 PM
Some domains do require regulatory updates due to changes in the field, in which case it may make sense to have a commission (sorta like they're doing here), but based on this post, they basically just gave carte blanche to disallow everything (while not directly stating that nothing is allowed).
by option on 5/29/24, 10:53 PM
Sorry, future startups in this field - I guess when there is enough compute in a few years to start hitting this bill's definitions, you will need to move elsewhere.
by jjtheblunt on 5/29/24, 10:23 PM
"A designer knows he has achieved perfection not when there is nothing left to add, but when there is nothing left to take away."
attributed (perhaps not a perfect translation) to Antoine de Saint-Exupéry
by Aloisius on 5/30/24, 2:11 AM
How exactly does that make sense?
That's like only regulating fighter jet safety while letting passenger jets do whatever they want.
by dools on 5/29/24, 10:52 PM
by aeternum on 5/29/24, 10:28 PM
by moneycantbuy on 5/29/24, 10:30 PM
by jackcviers3 on 5/29/24, 10:48 PM
by haswell on 5/29/24, 10:58 PM
It seems reasonable to regulate specific aspects of safety, e.g. it shouldn’t be possible to derive the formulas for chemical or nuclear weapons. If an AI company cannot be certain of this, it seems reasonable for the government to impose restrictions on the handling of such models. Where things get murky is how to define risk for things that are less obvious.
Deeply embedded in the tech community is an ethos and belief that core tech is neutral and that the potential for harmful use is an acceptable reality. I have lived and breathed this for 20 years.
But information is not neutral, and the output of these models is derived from a corpus of information that is anything but neutral. A subset of this information is so incredibly harmful that the risk of its inclusion is high enough to warrant significant effort in ensuring safety.
Again, assuming a sufficiently advanced model, might it be reasonable that the onus is on the creator of the model to understand what it can do and certify what it cannot do?
Many have commented that the non-regulation of the early Internet was necessary and good, and that the same should be true for AI. But the eras are very unlike each other. The entire technology industry was still forming, and no one knew what would come of it. What grew was incredible. But with it came the data brokers, the worst aspects of social media, etc.
It’s against this backdrop of an increasingly enshittified landscape that regulators are looking at this potentially seismic shift in computing and are unwilling to let the companies that cooked up the current mess go headlong into something potentially far more fantastical (for good or bad) without checks and balances.
I still think the bill is far too vague and just drives AI development out of CA. But I can understand where it’s coming from, and I wouldn’t be surprised to see a push for this nationally, with similarly vague specifications because no one knows how to quantify AI risk.
Edit: To be clear, I'm asking questions and trying to explore the legislator's angle here. I'm not saying I think this kind of legislation will be effective or that this is a good bill.
by apsec112 on 5/29/24, 11:05 PM
https://thezvi.substack.com/p/on-the-proposed-california-sb-...
by brigadier132 on 5/29/24, 10:45 PM
Are Facebook AI researchers going to continue working in California when they can be put in jail? What about all the researchers in the UC system and Stanford?