by woodgrainz on 7/5/19, 3:49 AM with 264 comments
by jlangenauer on 7/5/19, 7:47 AM
The root of the problem is not that we have bots, but that we have normalised lying and deception as part of everyday business. We allow companies to pretend that bots are human beings, and allow call-center employees in third-world countries to pretend (sometimes even through elaborate lying) that they are located in the same country as you. We allow companies to tell outrageous untruths in their advertising - see the Samsung ad they're currently being hauled over the coals for in Australia.
That's the real problem here, and the one we need to fix on a general level, not by band-aid regulations over whichever dishonesty has managed to irritate enough state representatives.
by dessant on 7/5/19, 6:04 AM
Needless to say, I have spent a couple of minutes repeatedly asking a question, and even rephrasing it, frustrated that this "person" did not seem to grasp my issue.
by throwaway13337 on 7/5/19, 7:20 AM
If individual states each enact individual laws governing the internet, then only large companies will have the resources to follow them.
We'll see a balkanization of the web wherein it's no longer very world wide. Small internet businesses will become harder and harder to start. Big monopolies will become entrenched.
It's not pretty.
by AnthonyMouse on 7/5/19, 5:36 AM
by wickedlogic on 7/5/19, 3:40 PM
If I launch a new tab in the background and tell it to go establish some set of factors for me, or locate price points and details for me, or buy something for me (and, right now, as me)... or just let me browse and interactively direct it while it blocks ads as I go - is that a bot?
I know the law, and lawmakers, are looking at this from a fraudulent-content perspective, but they are going to be hard pressed to do anything in the long run to quell this.
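The kind of agent described above is easy to sketch. The following is a hypothetical illustration only: `PAGES` is an in-memory stand-in for pages a real browser-automation tool would fetch, and all URLs and helper names are invented for the example.

```python
# Hypothetical sketch of a price-hunting agent. A real version would drive a
# browser via an automation framework; PAGES stands in for fetched pages so
# the example stays self-contained.
import re

PAGES = {
    "https://shop-a.example/widget": "Widget - now only $19.99!",
    "https://shop-b.example/widget": "Widget, price: $17.50 while stocks last",
    "https://shop-c.example/widget": "Out of stock",
}

def extract_price(html: str):
    """Return the first dollar amount found on the page, or None."""
    match = re.search(r"\$(\d+(?:\.\d{2})?)", html)
    return float(match.group(1)) if match else None

def find_best_price(urls):
    """'Browse' each URL and return (url, price) for the cheapest offer."""
    offers = {}
    for url in urls:
        price = extract_price(PAGES.get(url, ""))
        if price is not None:
            offers[url] = price
    return min(offers.items(), key=lambda kv: kv[1]) if offers else None

best = find_best_price(PAGES)
# best == ("https://shop-b.example/widget", 17.5)
```

Nothing here impersonates a human; whether such an agent counts as a "bot" under the law is exactly the commenter's question.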
by mirimir on 7/5/19, 5:36 AM
I doubt that anyone running bots, and who is technically competent, will be identifiable or findable. I mean, I could do it, and I'm just a random anonymous coward.
by CM30 on 7/5/19, 12:00 PM
For instance, ARGs could have bot accounts for fictional characters on social media sites. These accounts could give pre-recorded messages that then hint that the user should visit some third-party site for more clues or information. Is that legally dubious? I can see it being so under this law, but I don't think it's comparable to a business running, say, an automated chat support system and pretending its bots are human.
The same goes for roleplaying bots on online community sites. These aren't a huge thing right now, but they could be in the future, with accounts that act like NPCs do in video games or interact with the player's account in side quests or whatnot. These don't seem like morally 'wrong' things to have on a site, but they'd probably get hit by this law regardless.
Point is, these types of bots don't necessarily only have dodgy use cases.
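The ARG/NPC-style account described above could be as simple as a keyword-matched script of pre-recorded lines. A minimal sketch, with an invented script and URL purely for illustration:

```python
# Hypothetical NPC-style bot: replies with pre-recorded lines and points
# players toward the next clue. Script contents and URL are made up.
SCRIPT = {
    "hello": "Greetings, traveler. Seek the lighthouse keeper.",
    "lighthouse": "The keeper left a note at example.com/clue2",
}
DEFAULT = "The fog is too thick to answer that."

def npc_reply(user_message: str) -> str:
    """Return the first scripted line whose keyword appears in the message."""
    text = user_message.lower()
    for keyword, line in SCRIPT.items():
        if keyword in text:
            return line
    return DEFAULT
```

Such an account never claims to be human, yet it is unambiguously an automated account on a social media site - the grey area the commenter is pointing at.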
by alexheikel on 7/5/19, 12:12 PM
by modernerd on 7/5/19, 12:32 PM
This time we've had to introduce some changes to abide by new state legislation.
Messages entered into the chat console must be followed immediately by the string " [I am a bot]", whether you are a bot or a human, but especially if you are a human.
Good luck and have fun!
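The tongue-in-cheek patch note above boils down to appending a fixed disclosure string to every outgoing message. A minimal sketch of that rule (function name and behavior are hypothetical, in the spirit of the joke):

```python
# Sketch of the joked-about disclosure rule: every chat message gets the
# mandated suffix, whether the sender is a bot or a human.
DISCLOSURE = " [I am a bot]"

def send_message(text: str) -> str:
    """Return the message as it would appear in the chat console."""
    if not text.endswith(DISCLOSURE):
        text += DISCLOSURE
    return text

print(send_message("gg wp"))  # gg wp [I am a bot]
```

The check prevents double-tagging if a message already carries the suffix.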
by dbieber on 7/5/19, 5:58 PM
by gnicholas on 7/5/19, 5:59 PM
These folks are essentially like bots, insofar as they are "programmed" to respond and significantly constrained in their latitude. They're like human bots, no?
by sailfast on 7/5/19, 2:55 PM
I'd argue yes, but where they may have an argument is if I respond "no, I am human" when people ask me whether I'm a bot - that's intentionally misleading.
They may have more luck here in the commercial space, where they can better regulate and enforce these rules, as with advertising and other sales practices. Not sure where this goes in politics or other domains in terms of enforceability.
by gregoryexe on 7/5/19, 3:53 PM
by msh317 on 7/5/19, 6:59 PM
Bad idea
by inlined on 7/5/19, 5:23 AM
by ajflores1604 on 7/5/19, 4:03 PM
by unreal37 on 7/5/19, 3:41 PM
by sneak on 7/5/19, 6:23 AM
Human beings have human rights to express themselves however they wish.
by threezero on 7/5/19, 9:55 PM
by archy_ on 7/5/19, 5:28 PM
by mcantelon on 7/5/19, 5:35 PM
by shultays on 7/5/19, 9:26 AM
https://twitter.com/SamsungSupport/status/114041514667572838...
And it even uses a human name. Really dishonest.
by ourmandave on 7/5/19, 2:50 PM
by KrishMunot on 7/5/19, 5:01 PM
by cgb223 on 7/5/19, 5:00 PM
by vinniejames on 7/5/19, 12:25 PM
by frigfog on 7/5/19, 6:04 AM
by sjg007 on 7/5/19, 2:19 PM
by zn44 on 7/5/19, 12:45 PM
by buboard on 7/5/19, 9:01 AM
by Fjolsvith on 7/5/19, 5:08 AM
by scarejunba on 7/5/19, 5:31 AM
by arunbahl on 7/5/19, 6:50 AM
by repolfx on 7/5/19, 9:15 AM
"When you ask experts how bots influence politics—that is, what specifically these bits of computer code that purport to be human can accomplish during an election—they will give you a list: bots can smear the opposition through personal attacks; they can exaggerate voters’ fears and anger by repeating short simple slogans; they can overstate popularity; they can derail conversations and draw attention to symbolic and ultimately meaningless ideas; they can spread false narratives."
Since when can bots "smear the opposition through personal attacks"? Bots that post the same stuff written by humans over and over have existed for years and are easily filtered out by spam filters - bulk spam doesn't change people's politics anyway so in practice such bots are always advertising commercial products. Bots that constantly invent new ways to smear the opposition don't yet exist, not even in the lab.
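The "easily filtered out by spam filters" claim can be illustrated with the simplest possible duplicate detector: normalize each message and flag it once the same normalized text has been posted too many times. This is a minimal sketch, not any platform's actual filter; the threshold and example posts are invented.

```python
# Minimal near-duplicate spam filter: bots that repost the same human-written
# text over and over collide on the same fingerprint and get flagged.
import hashlib
import re

seen = {}  # fingerprint -> number of times posted

def fingerprint(message: str) -> str:
    """Lowercase, strip punctuation, collapse whitespace, then hash."""
    normalized = re.sub(r"[^a-z0-9 ]", "", message.lower())
    normalized = " ".join(normalized.split())
    return hashlib.sha256(normalized.encode()).hexdigest()

def is_spam(message: str, threshold: int = 3) -> bool:
    """Flag a message once its normalized form has been posted too often."""
    fp = fingerprint(message)
    seen[fp] = seen.get(fp, 0) + 1
    return seen[fp] >= threshold

posts = ["Vote YES on X!", "vote yes on x!!!", "Vote yes on X", "original thought"]
flags = [is_spam(p) for p in posts]
# flags == [False, False, True, False]
```

Punctuation and case changes don't evade it, which is the commenter's point: repetitive bots are a solved problem, unlike the hypothetical persuasive ones.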
This whole story is asserting that there are programs routinely running around the internet indistinguishable from humans, making points so excellent they successfully persuade people to switch their political affiliation, which is simply false.
In the article the word "experts" is a hyperlink. I was very curious what kind of bot expert might believe these fantasies. To my total lack of surprise, the link goes to a single "expert" who in fact knows nothing about AI, bots or technology in general - they're a political flak who worked for the Obama campaign and did a PhD in "communication".
This sort of manipulative deception is exactly why so many people no longer trust the media. The New Yorker runs an article that starts by asserting a fantasy as expert-supported fact, and then cites a member of the Obama campaign who went into social science academia (i.e. a field that systematically 'discovers' things that are false), and who has no tech background or indeed any evidence of their thesis whatsoever.
My experience has been that actual experts in bots are never approached for this sort of story.