from Hacker News

California bill would require bots to disclose that they are bots

by alexd127 on 2/7/25, 1:41 AM with 166 comments

  • by wongarsu on 2/7/25, 1:59 AM

    A decade ago, when chat bots were a lot less useful, a common piece of etiquette was that it's fine for a bot to pretend to be a human or God or whatever, but if you directly ask it if it's a bot it has to confirm that. Basically the bot version of that myth about undercover cops.

    I don't see a downside in requiring public-facing bots to do that

    Not sure if that's what the proposal is about though, it's currently down
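
    A minimal sketch of that etiquette, assuming a plain chat loop rather than any particular framework: the bot may roleplay freely, but a direct "are you a bot?" question always gets an honest disclosure first. The regex and the reply text are illustrative, not from the bill or the comment.

      import re
      from typing import Callable

      # Direct "are you a bot/AI/human?" questions trigger disclosure.
      # The pattern is a rough illustration, not an exhaustive matcher.
      DIRECT_QUESTION = re.compile(r"\bare\s+you\s+(a\s+|an\s+)?(bot|ai|human)\b",
                                   re.IGNORECASE)

      def reply(user_message: str, generate: Callable[[str], str]) -> str:
          """Wrap any chat backend with a disclosure check."""
          if DIRECT_QUESTION.search(user_message):
              return "Yes, I'm an automated assistant (a bot), not a human."
          return generate(user_message)

      if __name__ == "__main__":
          echo = lambda m: f"You said: {m}"          # stand-in backend
          print(reply("Are you a bot?", echo))       # forced disclosure
          print(reply("Any deals on shoes?", echo))  # normal reply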

  • by skylerwiernik on 2/7/25, 2:49 AM

    This is clearly undisclosed promotion for vetto.app. alexd127's only other account activity is on this thread [https://news.ycombinator.com/item?id=42901553] for the exact same bill.
  • by cebert on 2/7/25, 1:42 AM

    I wish this legislation would also apply to AI generated emails, sales outreach, and LinkedIn messages.
  • by rappatic on 2/7/25, 1:47 AM

    The requirement doesn't kick in until 10 million monthly US users. I don't see why this shouldn't apply to smaller businesses.
  • by evil-olive on 2/7/25, 2:09 AM

    since the Veeto website seems to be struggling, here's the official CA legislature page for the bill: https://leginfo.legislature.ca.gov/faces/billTextClient.xhtm...

    seems fairly narrowly written - it looks like it removes the existing condition that bot usage is only illegal when there's "intent to mislead". that seems like it would be very difficult to prove and would result in the law not really being enforced. instead there's a much more bright-line rule - it's illegal unless you disclose that it's a bot, and as long as you do that, you're fine.

    once I was able to load the Veeto page, I noticed there's a "chat" tab with "Ask me anything about this bill! I'll use the bill's text to help answer your questions." - so somewhat ironically it seems like the bill would directly affect this Veeto website as well, because they're using a chatbot of some kind.

  • by nico on 2/7/25, 1:57 AM

    Interesting. I’m afraid this won’t really go anywhere, but it’s a good conversation to have

    On one hand, judging by the comments, there’s quite a bit of interest in disclosure

    On the other hand, corporations and big advertisers (spammers?) might not really want it. Or is there a positive aspect in disclosure for them?

  • by tzury on 2/7/25, 3:15 AM

    Industry will get there pretty soon regardless of this bill or any other, since there is a paradigm shift underway.

    The conversation is no longer about scraping bots versus genuine human visitors. Today’s reality involves legitimate users leveraging AI agents to accomplish tasks—like scanning e-commerce sites for the best deals, auto-checking out, or running sophisticated data queries. Traditional red flags (such as numerous rapid requests or odd navigation flows) can easily represent honest customer behavior once enhanced by an AI assistant.

    see what I posted a couple of weeks ago -

    https://blog.tarab.ai/p/bot-management-reimagined-in-the
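
    A minimal sketch of the "numerous rapid requests" red flag described above: a sliding-window rate check. The window and threshold are made-up numbers; the point is that an AI agent shopping on a real customer's behalf trips this check just as easily as a scraper would.

      import time
      from collections import deque

      WINDOW_SECONDS = 10.0   # hypothetical sliding window
      MAX_REQUESTS = 20       # hypothetical threshold

      class RapidRequestFlag:
          """Flags a client that makes too many requests inside the window."""

          def __init__(self) -> None:
              self.timestamps: deque = deque()

          def record_and_check(self, now: float | None = None) -> bool:
              now = time.monotonic() if now is None else now
              self.timestamps.append(now)
              # Evict requests that have fallen out of the window.
              while self.timestamps and now - self.timestamps[0] > WINDOW_SECONDS:
                  self.timestamps.popleft()
              return len(self.timestamps) > MAX_REQUESTS

      if __name__ == "__main__":
          flag = RapidRequestFlag()
          # 30 requests in ~3 seconds: an AI shopping agent and a scraper
          # look identical to this check.
          hits = [flag.record_and_check(now=t * 0.1) for t in range(30)]
          print(any(hits))  # True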

  • by doctorpangloss on 2/7/25, 1:48 AM

    Should PUBG mobile players be told they are winning against bots?

    Should psychics tell you they cannot really speak for the dead?

  • by aithrowawaycomm on 2/7/25, 3:23 AM

    I understand being against this law on practicality / constitutionality grounds. It seems to me that the "converse" law would be more useful and legally appropriate: forbid people from making intentionally deceptive bots that claim to be human or have human-like emotional/social capabilities.
  • by rhelz on 2/7/25, 2:41 AM

    Alternate proposal: For every meme, we are by law able to see who created that meme. If all the knuckle-heads who fall for every meme could see, e.g., that it was created by a Russian or Chinese propaganda organization, I think they would soon learn to be more skeptical.
  • by ars on 2/7/25, 2:05 AM

    So basically all the "good" bots will follow the law, and all the scamming and problematic bots won't, giving people a false sense of security, and leaving everyone worse off than they were before.
  • by benatkin on 2/7/25, 2:12 AM

    If they figured out how to enforce this effectively, the role of something like OpenAI’s Operator could end up being played by a human much of the time. Then there would be click farms similar to call centers. Presumably an AI could watch as long as humans were clicking, which could make click farms more viable through real time quality control. This would also be a boon for projects like Worldcoin to verify distinct humans. I’m having trouble coming up with a scenario in which this works well.
  • by dhdjruf on 2/7/25, 2:59 AM

    An even better law would be to force people who are getting paid to post to disclose which company they are working for. A shill disclosure law.

    The problem I see with this is that US companies will be forced to follow it, but foreign ones will subvert it. Knowing that foreign companies will just break the law, US companies will offshore these jobs.

    No shilling, no bots: I think all consumers would agree to vote for such bills.

  • by deepsun on 2/7/25, 3:53 AM

    So Californian/US companies would have to comply, and other companies/states would get an advantage.
  • by xtiansimon on 2/7/25, 1:00 PM

    I say this to phone spammers all the time—are you a bot? If you are you have to tell me. Haha. Reminds me of a conversation I had with a CSR once, who gave their name as “Maryjane”. I asked, on an impulse, is that your real name? They said, No. It’s a jungle out there. Good for CA.
  • by xnx on 2/7/25, 2:14 AM

    I like that this bill partially implements the second law of robotics:

    "1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.

    2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law."

  • by nico on 2/7/25, 2:01 AM

    Archive link to the overview: https://archive.ph/U7q9u

    And full text here: https://archive.ph/AYnoR

  • by soheil on 2/7/25, 2:41 AM

    I get this eerie feeling she has no idea what a bot even is in a technical sense and most likely is being puppeteered by weasels like State Senator Scott Wiener, who introduced gems like the AI Models Act [1][2][3]. If the bill passes, wouldn't she then have to be the first to disclose she's a bot?

    [1] https://www.answer.ai/posts/2024-04-29-sb1047.html (SB-1047 will stifle open-source AI and decrease safety)

    [2] https://safesecureai.org/responseletter (Response to inaccurate, inflammatory statements by Y Combinator & a16z regarding Senate Bill 1047)

    [3] https://news.ycombinator.com/item?id=41690302 (Gavin Newsom vetoes SB 1047)

  • by amatuer_sodapop on 2/7/25, 3:03 AM

    > The bill updates the legal definition of "bot" to encompass accounts operated by generative artificial intelligence - including systems that create synthetic images, videos, audio, and text.

    I'm not sure how enforceable that is tbh.

  • by bozhark on 2/7/25, 1:50 AM

    It should be like ads.

    Require a watermark that is NIST-standardized

    edit: sorry, I thought y’all would get the joke

  • by lostmsu on 2/7/25, 3:34 PM

    Shame, this makes my Turing test battle royale game https://trashtalk.borg.games/ illegal in California.
  • by acomjean on 2/7/25, 2:15 AM

    what if they don't know they are bots like in this short film:

    https://www.youtube.com/watch?v=4VrLQXR7mKU

    or any Blade Runner..

  • by smsm42 on 2/7/25, 2:04 AM

    Who wants to bet this ends up in a Prop 65-like list of disclaimers about all the automation tools used for every message, which will make the messages bigger but won't make them any less frequent?
  • by ideashower on 2/7/25, 2:29 AM

    This is a cool website, would love to see more like it in other states.
  • by gnabgib on 2/7/25, 2:36 AM

    More discussion 5 days ago (77 points, 73 comments) https://news.ycombinator.com/item?id=42901553
  • by SlightlyLeftPad on 2/7/25, 3:07 AM

    Is this feasibly enforceable? The types of organizations that employ bots seem like the same types that wouldn't care about California laws.
  • by ianferrel on 2/7/25, 3:24 AM

    Seems only fair, given how often I have to check a box to prove I'm not one.
  • by ChrisArchitect on 2/7/25, 3:57 AM

    Aside: what is this Veeto site all about? How long has it been around?
  • by TriangleEdge on 2/7/25, 3:52 AM

    Given we are past the Turing test, how are they going to enforce this?
  • by petermcneeley on 2/7/25, 4:08 AM

    What if more human than human is your motto?
  • by johndhi on 2/7/25, 2:53 AM

    Wait... What? There is already a law for this exact purpose in California. It's called the BOT Act from 2019

    https://dailyjournal.com/articles/379909-california-s-bolste...

  • by krustyburger on 2/7/25, 1:54 AM

    Seems to be down.
  • by leeeeeepw on 2/7/25, 3:03 AM

    Seems impossible
  • by santusantu on 2/7/25, 3:15 PM

    Hack a phone
  • by blibble on 2/7/25, 2:43 AM

    hopefully applies to crawlers too
  • by barfingclouds on 2/7/25, 4:12 AM

    Yes please
  • by beanjuiceII on 2/7/25, 3:24 AM

    just another thing that will hurt the small guy somehow, and barely bother the big guy
  • by Waterluvian on 2/7/25, 2:32 AM

    “I’m Mr. Meeseeks, look at me!”
  • by waltercool on 2/7/25, 3:13 AM

    How are they going to know who is a bot?

    Yet another dead law. This is like asking a criminal to not use weapons.

  • by rickcarlino on 2/7/25, 1:48 AM

    Are you a bot? You gotta tell me if you’re a bot. /s
  • by rufus_foreman on 2/7/25, 1:58 AM

    How do I know if I'm a bot?
  • by DrillShopper on 2/7/25, 1:57 AM

    Nah dude * hits bongs, coughs * if you ask if they're a bot they _have_ to say yes