from Hacker News

California law banning bots from pretending to be real people without disclosure

by woodgrainz on 7/5/19, 3:49 AM with 264 comments

  • by jlangenauer on 7/5/19, 7:47 AM

    I am reminded of Thoreau's quote: "There are a thousand hacking at the branches of evil to one who is striking at the root."

    The root of the problem is not that we have bots, but that we have normalised lying and deception as part of everyday business. We allow companies to pretend that bots are human beings, and allow call-center employees in third-world countries to pretend (sometimes even through elaborate lying) that they are located in the same country as you. We allow companies to tell outrageous untruths in their advertising - see the Samsung ad which they're currently being hauled over the coals for in Australia.

    That's the real problem here, and the one we need to fix on a general level, not by band-aid regulations over whichever dishonesty has managed to irritate enough state representatives.

  • by dessant on 7/5/19, 6:04 AM

    I felt stupid, but I fell victim to this on dpd.com while tracking a package. A helpful chat popup appeared where I could request assistance from a support agent, but it was not disclosed that the agent was a bot.

    Needless to say, I spent a couple of minutes repeatedly asking my question, and even rephrasing it, frustrated that this person did not seem to grasp my issue.

  • by throwaway13337 on 7/5/19, 7:20 AM

    California is setting a dangerous precedent.

    If individual states each enact individual laws governing the internet, then only large companies will have the resources to follow them.

    We'll see a balkanization of the web wherein it's no longer very world wide. Small internet businesses will become harder and harder to start. Big monopolies will become entrenched.

    It's not pretty.

  • by AnthonyMouse on 7/5/19, 5:36 AM

    That will be a fun one to interpret. So if I use autocorrect, am I a bot? What if the device makes next word suggestions and I use them? What if the device suggests the entire post, but I manually approve it? What if it suggests five separate posts and I approve them all at once?

  • by wickedlogic on 7/5/19, 3:40 PM

    Bots are going to be the way we interact with the web (and really all systems) going forward; this framing of 'real people' as just 'browsers' is quite a misunderstanding of what a 'user-agent' really means in this day and age.

    If I launch a new tab in the background, I can tell it to go establish some set of factors for me, or locate price points and details for me, or buy something for me (and, right now, as me)... or just have it let me browse while I interactively direct it, but block ads as I go.

    I know the law, and lawmakers, are looking at this from a fraudulent-content perspective, but they are going to be hard pressed to do anything in the long run to quell this.
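
    As a concrete illustration of the browser-agent idea in this comment, here is a minimal sketch using Selenium in Python; the URL, CSS selector, and price-checking task are hypothetical stand-ins, not anything from the article or the law:

      # A background "tab" sent off to fetch a price point on my behalf.
      # Assumes: pip install selenium (Selenium 4+, with a local Chrome).
      from selenium import webdriver
      from selenium.webdriver.common.by import By

      options = webdriver.ChromeOptions()
      options.add_argument("--headless")  # no visible window; it browses as me
      driver = webdriver.Chrome(options=options)

      try:
          # Hypothetical product page and selector, for illustration only.
          driver.get("https://shop.example.com/product/123")
          price = driver.find_element(By.CSS_SELECTOR, ".price").text
          print(f"Current price: {price}")
      finally:
          driver.quit()

    Whether such an agent counts as a "bot" under the law, or simply as the user's own user-agent, is exactly the ambiguity this comment points at.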

  • by mirimir on 7/5/19, 5:36 AM

    > Violators could face fines under state statutes related to unfair competition.

    I doubt that anyone running bots who is technically competent will be identifiable or findable. I mean, I could do it, and I'm just a random anonymous coward.

  • by CM30 on 7/5/19, 12:00 PM

    Of course, this all raises the question of whether there are only unethical use cases for bots pretending to be human, or whether a law like this could hit benign uses for bots as well.

    For instance, ARGs could have bot accounts for fictional characters on social media sites. These accounts could give pre-recorded messages that hint that the user should visit some third-party site for more clues or information. Is that legally dubious? I can see it being so under this law, but I don't think it's comparable to a business running, say, an automated chat support system and pretending its bots are human.

    The same goes for roleplaying bots on online community sites. These aren't a huge thing right now, but they could be in the future, with accounts that act like NPCs do in video games or interact with the player's account in side quests or whatnot. These don't seem like they'd be morally 'wrong' things to have on a site, but they'd probably get hit by this law regardless.

    Point is, these types of bots don't necessarily only have dodgy use cases.

  • by alexheikel on 7/5/19, 12:12 PM

    I can say that people like humans way better than bots. We have an app that looks like a bot but is a real person, and we say so at the beginning, but people don't believe it. Once they figure it out, they go crazy. It's always local people, by the way. So I think at some point, bots should at least explain what they are instead of pretending to be human.

  • by modernerd on 7/5/19, 12:32 PM

    Welcome to the Fifth Annual Californian Turing Test Chatbot Hackathon!

    This time we've had to introduce some changes to abide by new state legislation.

    Messages entered into the chat console must be followed immediately by the string " [I am a bot]", whether you are a bot or a human, but especially if you are a human.

    Good luck and have fun!
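
    Taking the joke's rule literally, a minimal sketch of a "compliant" message sender in Python (the function name and the rule are the joke's, not the statute's):

      # Append the mandated disclosure to every outgoing message,
      # whether the sender is a bot or a human -- but especially a human.
      DISCLOSURE = " [I am a bot]"

      def send_compliant(message: str) -> str:
          return message + DISCLOSURE

      print(send_compliant("Hello, fellow humans!"))
      # -> Hello, fellow humans! [I am a bot]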

  • by gnicholas on 7/5/19, 5:59 PM

    What about customer support or sales agents who are operating solely off a script? I was contacted by a sales agent of a "lead generation" company who was being impersonated by her offshore workers. Eventually, during our email exchange, the real person jumped in (but did not announce that the prior correspondence had been with someone in an offshore sales center). They use these agents to pretend to be their clients, and based on the not-great experience I had when I was initially contacted (the email was not well written and said things about my company that weren't quite right), I would not use them.

    These folks are essentially like bots, insofar as they are "programmed" to respond and significantly constrained in their latitude. They're like human bots, no?

  • by sailfast on 7/5/19, 2:55 PM

    “Bots are not people” - OK, but if they’re written by people with a specific intent, as speech, is that no longer protected speech? If I program a bot to chant slogans on Twitter, isn’t that something I’d have the right to do?

    I’d argue yes, but where they may have an argument is if I respond “no, I am human” when people ask whether I’m a bot - that’s intentionally misleading.

    They may have more luck in the commercial space, where they can better regulate and enforce these rules, as with advertising and other sales practices. Not sure where this goes in politics or other domains in terms of enforceability.

  • by gregoryexe on 7/5/19, 3:53 PM

    I'm confident it will be as effective as the Do Not Call registry.

  • by msh317 on 7/5/19, 6:59 PM

    This is a horrible idea. California doesn't have the ability to enforce such a law, and hackers would simply operate from outside California.

    Bad idea.

  • by inlined on 7/5/19, 5:23 AM

    Political ads on TV require disclosure. I see no reason disclosure rules shouldn’t apply to internet campaigns of all forms.

  • by ajflores1604 on 7/5/19, 4:03 PM

    Does anyone have any insight into whether this would affect market trading algorithms and bots? The article says the law is "requiring that they reveal their “artificial identity” when they are used to sell a product", but I'm not sure how broad a definition they want for the word "selling".

  • by unreal37 on 7/5/19, 3:41 PM

    This will bring about the end of Twitter, a website where 75% of the content is bots.

  • by sneak on 7/5/19, 6:23 AM

    Bots are communications tools configured and deployed by human beings, at least until they can pass the Turing Test.

    Human beings have human rights to express themselves however they wish.

  • by threezero on 7/5/19, 9:55 PM

    I get the feeling that the commercial part of this law might hold up, but the election part is highly questionable based on Supreme Court rulings.

  • by archy_ on 7/5/19, 5:28 PM

    Doesn't this effectively outlaw services that scrape your bank accounts (but don't have an official API to work with)?

  • by mcantelon on 7/5/19, 5:35 PM

    Sounds unworkable, but is likely a pretext for some other effort (like undermining anonymity).

  • by shultays on 7/5/19, 9:26 AM

    Some companies use bots for their support Twitter accounts. Samsung, for example, had a Twitter bot that replied to everyone:

    https://twitter.com/SamsungSupport/status/114041514667572838...

    And it even uses a human name. Really dishonest.

  • by ourmandave on 7/5/19, 2:50 PM

    As if millions of Facebook accounts cried out in terror, and were suddenly silenced.

  • by KrishMunot on 7/5/19, 5:01 PM

    Does this have to do with the Google AI booking salon appointments over the phone?

  • by cgb223 on 7/5/19, 5:00 PM

    Cool, hopefully the spammers on Tinder will respect California law /s

  • by vinniejames on 7/5/19, 12:25 PM

    So this disclosure will now be required with all robocall spam calls?

  • by frigfog on 7/5/19, 6:04 AM

    Well, at least the Turing test is solved.

  • by sjg007 on 7/5/19, 2:19 PM

    It’s a start; it should cover Facebook and Google.

  • by zn44 on 7/5/19, 12:45 PM

    what about people pretending to be bots?

  • by buboard on 7/5/19, 9:01 AM

    But what about humans posing as bots?

  • by scarejunba on 7/5/19, 5:31 AM

    If I make a generated face that speaks my stuff with a generated voice, is it a bot?

  • by arunbahl on 7/5/19, 6:50 AM

    What are the implications of this on the Turing Test?

  • by repolfx on 7/5/19, 9:15 AM

    I really hate this stuff. The article starts out with a paragraph of complete nonsense:

    "When you ask experts how bots influence politics—that is, what specifically these bits of computer code that purport to be human can accomplish during an election—they will give you a list: bots can smear the opposition through personal attacks; they can exaggerate voters’ fears and anger by repeating short simple slogans; they can overstate popularity; they can derail conversations and draw attention to symbolic and ultimately meaningless ideas; they can spread false narratives."

    Since when can bots "smear the opposition through personal attacks"? Bots that post the same stuff written by humans over and over have existed for years and are easily filtered out by spam filters - bulk spam doesn't change people's politics anyway so in practice such bots are always advertising commercial products. Bots that constantly invent new ways to smear the opposition don't yet exist, not even in the lab.

    This whole story is asserting that there are programs routinely running around the internet indistinguishable from humans, making points so excellent they successfully persuade people to switch their political affiliation, which is simply false.

    In the article the word "experts" is a hyperlink. I was very curious what kind of bot expert might believe these fantasies. To my total lack of surprise, the link goes to a single "expert" who in fact knows nothing about AI, bots or technology in general - they're a political flak who worked for the Obama campaign and did a PhD in "communication".

    This sort of manipulative deception is exactly why so many people no longer trust the media. The New Yorker runs an article that starts by asserting a fantasy as expert-supported fact, and then cites a member of the Obama campaign who went into social science academia (i.e. a field that systematically 'discovers' things that are false), and who has no tech background or indeed any evidence of their thesis whatsoever.

    My experience has been that actual experts in bots are never approached for this sort of story.
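
    The spam-filtering claim in this comment is easy to make concrete. Here is a minimal sketch in Python of the kind of duplicate-content filter that catches bots posting the same human-written message over and over; the normalization rule and threshold are illustrative assumptions, not any platform's actual filter:

      # Collapse trivially varied copies of a canned message to one hash,
      # then flag the content once it shows up too often.
      import hashlib
      import re
      from collections import Counter

      seen = Counter()

      def normalize(post: str) -> str:
          # Lowercase and strip punctuation/whitespace so small edits to
          # the same canned message still collide.
          return re.sub(r"[\W_]+", " ", post.lower()).strip()

      def is_bulk_spam(post: str, threshold: int = 5) -> bool:
          digest = hashlib.sha256(normalize(post).encode()).hexdigest()
          seen[digest] += 1
          return seen[digest] > threshold

    Production filters typically add fuzzier near-duplicate matching (e.g. shingling or MinHash) and behavioral signals, but the principle is the same: verbatim repetition is the easiest bot signature to catch.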