from Hacker News

California Senate Passes SB 1047

by adityamwagh on 5/29/24, 6:55 PM with 42 comments

  • by rdtsc on 5/29/24, 10:35 PM

    > If you want to train a model that could conceivably fall into any of those three categories, you have to sign a document under pain of perjury (a felony) promising the Frontier Model Division that it is safe;

    That just sounds so strange. People will have to sign promises that their model is safe against possible end-user harm. It seems like they are setting up something for selective enforcement.

    > The California government is gearing up for the “increased incarceration costs” and other costs associated with criminalizing AI development if this bill passes

    That is just so odd. Weren't they the state focusing intently on quicker rehabilitation and alternatives to incarceration? And now they are preparing to incarcerate more people over AI models? What is even going on in that state?

    What's the obvious loophole for the companies here? Leave the state? Or can they just say "This model was designed and built in Nevada/Kansas/Kentucky/etc. We (OpenAI) are just licensing it as an end-user from our partners in Kentucky, sorry, CA attorney general, /shrug/"?

  • by ApolloFortyNine on 5/29/24, 10:37 PM

    Wild that California actually passed this. People joke about that state, but I thought this was crazy even for them.

    The biggest points imo being

    >The Frontier Model Division has a wide scope to define the “perjury” charges the bill attaches to developers for end-user misconduct, including “the rigor and detail” of the developer’s safety protocols

    >So is any model trained with less compute but matching a 10^26 flops-model in “relevant benchmarks” (the bill doesn’t say which)

    >So is any model trained with less compute AND lower benchmarks, so long as it is of “similar general capability” (we do not know what this means)

    Actually legislating that the developer of the AI is liable for what end users do is absolutely wild. And the other clauses they added make this essentially apply to any AI that can form a sentence. They made the categories so broad that an argument could be made that the law applies to any AI at all.
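
    For a rough sense of scale on that 10^26 FLOPs figure, here is a quick back-of-envelope sketch (mine, not from the bill or the linked summary) using the common approximation that training compute is roughly 6 x parameters x training tokens; the model sizes below are made-up illustrations, not real figures.

      # Illustrative only: rule-of-thumb training compute estimate,
      # training_flops ~ 6 * parameters * training_tokens.
      def training_flops(params: float, tokens: float) -> float:
          """Rough estimate of total training compute in FLOPs."""
          return 6 * params * tokens

      THRESHOLD = 1e26  # the 10^26 FLOPs figure quoted above

      # Hypothetical (parameter count, training tokens) pairs, not real models.
      for params, tokens in [(70e9, 2e12), (400e9, 15e12), (1e12, 30e12)]:
          flops = training_flops(params, tokens)
          status = "over" if flops > THRESHOLD else "under"
          print(f"{params:.0e} params x {tokens:.0e} tokens -> {flops:.1e} FLOPs ({status} threshold)")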

  • by trhway on 5/29/24, 10:50 PM

    The laws will more and more prohibit LLMs from producing "dangerous" information, "biological weapons" and the like (down to maps of public access to beaches). That information comes from training data, so naturally these laws push toward removing "dangerous" information from the training data. Once it is established as a matter of practice that you can't legally access "dangerous" info through an LLM, the natural question arises why you can still access it through the browser, and society will be mentally ready to prohibit access through the browser too. Voila. Wet dream of the government. If you can't successfully mount a frontal attack, you make a flanking maneuver.
  • by 65a on 5/30/24, 12:35 AM

    Voting for new legislators, personally. I wish they'd do something about PG&E or housing instead of criminalizing software development of chatbots. Truly useless, and I wish we had more choice of non-insane candidates.
  • by XEKEP on 5/29/24, 11:00 PM

    I'd love to hear from a lawyer.

    Is this law even constitutional? Doesn't it violate the first amendment?

  • by gmuslera on 5/29/24, 11:28 PM

    So is only AI developed, used, and hosted in California, US covered, or are they trying to legislate over the entire world? Will they ban the use of non-compliant models inside California's borders, while those models can be used freely elsewhere?

    Using geography-limited legislation to govern things that are used countrywide or worldwide, whether for this law or others from there or elsewhere, is a bit overreaching. In the end, if a senate democratically elected for a territory, representing its citizens, decides something, that is fine, as long as it only affects the people of that territory. Things get very wrong when your elected government imposes its opinion on other territories or countries; there should be a name for that.

  • by Spivak on 5/29/24, 10:33 PM

    Cheers to OpenAI, who I'm sure is popping champagne that they actually got this garbage passed. I'm genuinely shocked that legislators don't see these models as sources of information: there's no way they would pass a law requiring libraries to certify that readers can't use any of the books for evil, but here we are. And weirder still, making them criminally liable if someone checks out a chemistry textbook and cooks meth.

    These models are really cool and novel, don't get me wrong, but they're self-reading books on their training data and ought to be regulated as such. But hey, if it means some other state is gonna be the hub for AI, I'm all for it.

  • by foolswisdom on 5/29/24, 10:37 PM

    Okay, so they're regulating AI. What's most noteworthy to me is how applicability appears completely arbitrary and sounds like it could even be set after the fact by virtue of the jury instructions allowed.

    Some domains do require regulatory updates due to changes in the field, in which case it may make sense to have a commission (sorta like they're doing here), but based on this post, they basically just gave it carte blanche to disallow everything (while not directly stating that nothing is allowed).

  • by option on 5/29/24, 10:53 PM

    Shame on the CA senate for being too corrupt, too stupid, or both in passing this. Congratulations to MSFT/OpenAI and Google for successful regulatory capture.

    Sorry, future startups in this field - I guess when there is enough compute in a few years to start hitting this bill's definitions, you will need to move elsewhere.

  • by jjtheblunt on 5/29/24, 10:23 PM

    Wow: you can tell legislators add ad hoc laws galore, which seems so unaligned with the wisdom below.

    "A designer knows he has achieved perfection not when there is nothing left to add, but when there is nothing left to take away."

    attributed (perhaps not a perfect translation) to Antoine de Saint-Exupéry

  • by Aloisius on 5/30/24, 2:11 AM

    Wait. The most advanced models have to be safe against mass monetary losses and mass casualties, but not the less advanced ones?

    How exactly does that make sense?

    That's like only regulating fighter jet safety while letting passenger jets do whatever they want.

  • by dools on 5/29/24, 10:52 PM

    Oh well, gotta start somewhere I guess
  • by aeternum on 5/29/24, 10:28 PM

    ChatGPT and Claude can write better legislation than this.
  • by moneycantbuy on 5/29/24, 10:30 PM

    this may make me move out of california
  • by jackcviers3 on 5/29/24, 10:48 PM

    This is awful.
  • by haswell on 5/29/24, 10:58 PM

    As a thought experiment, I think it’s useful to try steel manning this bill, and assume for a minute that the potential risk of a sufficiently advanced and “unsafe” model is real and severe.

    It seems reasonable to regulate specific aspects of safety, e.g. it shouldn’t be possible to derive the formulas for chemical or nuclear weapons. If an AI company cannot be certain of this, it seems reasonable for the government to impose restrictions on the handling of such models. Where things get murky is how to define risk for things that are less obvious.

    Deeply embedded in the tech community is an ethos and belief that core tech is neutral and that the potential for harmful use is an acceptable reality. I have lived and breathed this for 20 years.

    But information is not neutral, and the output of these models is derived from a corpus of information that is anything but neutral. A subset of this information is so incredibly harmful that the risk of its inclusion is high enough to warrant significant effort in ensuring safety.

    Again, assuming a sufficiently advanced model, might it be reasonable that the onus is on the creator of the model to understand what it can do and certify what it cannot do?

    Many have commented that the non-regulation of the early Internet was necessary and good, and that the same should be true for AI. But the eras are very unlike each other. The entire technology industry was still forming, and no one knew what would come of it. What grew was incredible. But with it came the data brokers, the worst aspects of social media, etc.

    It’s against this backdrop of an increasingly enshittified landscape that regulators are looking at this potentially seismic shift in computing and not feeling willing to let the companies that cooked up the current mess go headlong into something potentially far more fantastical (for good or bad) without checks and balances.

    I still think the bill is far too vague and just drives AI development out of CA. But I can understand where it’s coming from, and I wouldn’t be surprised to see a push for this nationally, with similarly vague specifications because no one knows how to quantify AI risk.

    Edit: To be clear, I'm asking questions and trying to explore the legislator's angle here. I'm not saying I think this kind of legislation will be effective or that this is a good bill.

  • by apsec112 on 5/29/24, 11:05 PM

    This is a really biased and misleading summary. In particular, it's vanishingly unlikely that anyone will have criminal penalties applied. More analysis here, going over the entire text of the bill in detail:

    https://thezvi.substack.com/p/on-the-proposed-california-sb-...

    https://thezvi.substack.com/p/q-and-a-on-proposed-sb-1047

  • by brigadier132 on 5/29/24, 10:45 PM

    I think this law is terrible but I'm happy about it from the perspective of being able to have a state self-destruct so gloriously that it can serve as an example for everyone else.

    Are Facebook AI researchers going to continue working in California when they can be put in jail? What about all the researchers in the UC system and Stanford?