from Hacker News

"I just bought a 2024 Chevy Tahoe for $1"

by isp on 12/18/23, 12:08 PM with 381 comments

  • by MichaelRo on 12/18/23, 1:40 PM

    I never understand people who engage with chat bots as customer service.

    I find them deeply upsetting, not one step above the phone robot on Vodafone support: "press 1 for internet problems" ... "press 2 to be transferred to a human representative". Only problem is going through like 7 steps until I can reach that human, then waiting some 30 minutes until the line is free.

    But it's the only approach that gets anything done. Talking to a human.

    Robots a a cruel joke on customers.

  • by isp on 12/18/23, 12:08 PM

    A cautionary tale for why not to put unfiltered ChatGPT output directly to customers.

    Nitter mirror: https://nitter.net/ChrisJBakke/status/1736533308849443121

    Related - "New kind of resource consumption attack just dropped": https://twitter.com/loganb/status/1736449964006654329 | https://nitter.net/loganb/status/1736449964006654329

  • by sorenjan on 12/18/23, 12:47 PM

    Someone on Reddit got a really nice love story between a Chevy Tahoe and Chevy Chase from it.

    https://imgur.com/vfHGHW6

    https://imgur.com/JSjNC2c

    https://old.reddit.com/r/OpenAI/comments/18kjwcj/why_pay_ind...

  • by mrweasel on 12/18/23, 1:27 PM

    Can someone who understand LLMs and ChatGPT explain how they expected this to work? It looks like they just had a direct ChatGPT prompt embedded in their site, but what was that suppose to do exactly?

    I can understand having an LLM trained on previous inquiries made via email, chat or transcribed phone calls, but a general LLM like ChatGPT, how is that going to be able to answer customers questions? The information ChatGPT has, specific to Chevrolet of Watsonville can't be anymore than what is already publicly available, so if customers can't find it, then maybe design a better website?

  • by MattDaEskimo on 12/18/23, 2:07 PM

    The more I use and see GPT bots in the wild as public-facing chatbots, the less I see them actually being useful.

    What's the solution here? An intermediate classifier to catch irrelevant commands? Seems wasteful.

    It's almost like the solution needs to be a fine-tuned model that has been trained on a lot of previous customer support interactions, and shut down/redirect anything strange to a human representative.

    Then I ask, why bother using a GPT? It has so much loaded knowledge that is detrimental to it's narrow goal.

    I'm all for chatbots, as a lot of questions & issues can be resolved using them very quickly.

  • by mikecoles on 12/18/23, 12:30 PM

    Was it FL that allowed for price negotiation via values placed in HTML forms? This was decades ago. Websites would send the $-values of products via html elements that the frontend designer wasn't expecting to be modified before the order was sent back from the client. The order system read the values back in and calculated the amount owed using these manipulated values. The naive, fun days of the adolescent web.
  • by remram on 12/18/23, 1:21 PM

    Is there any indication that they will get the car? Getting a chatbot to say "legally binding" probably doesn't make it so. Just like changing the HTML of the catalog to edit prices doesn't entitle you to anything.
  • by pacifika on 12/18/23, 1:05 PM

    So next time there will be a disclaimer on the page that the non human customer support is just advice and cannot be relied on. And collectively we lose more trust in computing.
  • by kmfrk on 12/18/23, 1:52 PM

    Big "Pepsi, Where's My Jet?" energy from this story.

    https://en.wikipedia.org/wiki/Pepsi,_Where%27s_My_Jet%3F

  • by supafastcoder on 12/18/23, 1:26 PM

    After building a free-for-all prompt myself (see profile), here’s how I protect against these attacks:

    1. Whatever they input gets rewritten in a certain format (in our case, everything gets rewritten to “I want to read a book about [subject]”)

    2. This then gets evaluated against our content policy to reject/accept their input

    This multi layered approach works really well and ensures high quality content.

  • by readyplayernull on 12/18/23, 1:02 PM

    My lovely grandmother passed away, she used to DROP TABLES so I could sleep...
  • by JadoJodo on 12/18/23, 2:40 PM

    I was previously on a team that was adjacent to the team that was working on this tool. While I'm not surprised to see this outcome a few years later, a lot of those involved early on thought it was a bad idea. Funny to see it in the wild.
  • by navaati on 12/18/23, 12:46 PM

    Putting aside the (very) funny aspect... If it worked somehow, would that fall under Computer Fraud and Abuse Act ?
  • by DeathArrow on 12/18/23, 2:55 PM

    If you convince chatbot to sell you a car for $1, can you win in court if the manufacturer doesn't deliver?
  • by philipov on 12/18/23, 2:17 PM

    You know you've been programming with shell scripts too much when your first thought seeing the headline is "Okay, but what's the value of $1?"
  • by emorning3 on 12/18/23, 4:13 PM

    This seems like hacking.

    Can this person be prosecuted under the terms of the Computer Fraud and Abuse Act???

    18 U.S. Code 1030 - Fraud and related activity in connection with computers

    RIP Aaron Swartz

  • by SkipperCat on 12/18/23, 3:13 PM

    This is hilarious. But lets not take this too seriously and say it proves Chatbots are worthless (or dangerous). People will start to understand the boundaries of chatbots and use them appropriately, and companies will understand those limits too. Once both sides are comfortable with the usage patterns, they will add value.

    Want to know the hours of the dealership, how long it will take to have a standard oil change done or what forms of ID to bring when transferring a title, chatbot is great.

    This is just like how the basic Internet was back in the 00's. It freaked people out to buy things on line but we got used to it and now we love it.

  • by whalesalad on 12/18/23, 1:51 PM

    Car dealership websites are some of the worst on the planet. There is so much inbound sales automation glued together it is remarkable they even work at all. Integrating ChatGPT is the icing on the cake.
  • by henry2023 on 12/18/23, 4:09 PM

    He probably won't get the Tahoe and this could and should be seen as ridiculous in any courtroom. However if you try to put an LLM in a different channel i.e. dealer's scheduled maintenance chat. I could see a FTC equivalent in a country that actually cares about customer protection making the customer whole on the promises made by the LLM.
  • by porphyra on 12/20/23, 1:40 AM

    Sycophancy in LLMs is a real problem. Here's a paper from Anthropic talking about it:

    https://arxiv.org/abs/2310.13548

  • by User23 on 12/18/23, 1:24 PM

    I wouldn’t be entirely shocked if someone doing this kind of prompt injection attack is arrested for “hacking.”
  • by rcpt on 12/20/23, 1:56 AM

    The dealership is getting way more than the price of a Tahoe in publicly from this.
  • by Alifatisk on 12/18/23, 1:13 PM

    Hahahaha someone started doing linear algebra with the chat https://twitter.com/Goatskey/status/1736555395303313704
  • by paxys on 12/18/23, 2:35 PM

    Fun experiment, but it isn't as much of a gotcha as people here think. They could have verbally tricked a human customer service agent into promising them the car for $1 in the same way but the end result would be the same – the agent (whether human or bot) doesn't have the authority to make that promise so you are walking away with nothing. I doubt the company is sweating because of this hack.

    Now if Chevrolet hooks their actual sales process to an LLM and has it sign contracts on their behalf... that'll be a sight to behold.

  • by wunderwuzzi23 on 12/18/23, 2:47 PM

    A real Orderbot has the menu items and prices as part of the chat context. So an attacker can just overwrite them.

    During my Ekoparty presentation about prompt injections, I talked about Orderbot Item-On-Sale Injection: https://youtu.be/ADHAokjniE4?t=927

    We will see these kind of attacks in real world applications more often going forward - and I'm sure some ambitious company will have a bot complete orders at one point.

  • by RecycledEle on 12/18/23, 5:31 PM

    In sci-fi I loved as a child, everything the computer did on behalf of its owner was binding. The computer was the legal agent of the owner.

    We need such laws today.

    I was told by NameCheap's LLM customer service bot (that claimed it was a person and not a bot) to post my email private key in my DNS records. That led to a ton of spam!

    The invention of LLM AIs would cause much less trouble if the operators were liable for all the damage they did.

  • by the_shivers on 12/20/23, 3:10 AM

    I feel like people are drawing the wrong conclusion from this.

    LLMs aren't perfect, but I would vastly prefer to be assisted by an LLM over the braindead customer service chatbots we had before. The solution isn't "don't use LLMs for this," but instead "take what the LLMs say with a grain of salt."

  • by black6 on 12/18/23, 1:24 PM

    Funny, but unless the chatbot is a legal agent of a dealership, it cannot enter into a legally binding contract. It's all very clear (as mud) in contract law. Judging from how easy LLMs are to game, we're a ways off from an "AI" being granted agent status for a business.
  • by no_wizard on 12/18/23, 2:20 PM

    I would love to see this enforced! That would be an interesting turn of events on AI
  • by RobRivera on 12/18/23, 5:47 PM

    So ... is there going to be a follow up about the legality of such a conversation or is this just a cute prompt engineering instance found in the wild?

    I am greatly interested in seeing the liability of mismanaged AI products

  • by GhostVII on 12/18/23, 2:32 PM

    I also found it fun to ask it to write a python script to determine what car brand I should buy - it ended up telling me to buy a Chevrolet if my budget is between 25k and 30k, but not in any other case
  • by strangattractor on 12/20/23, 1:08 AM

    Sounds a lot like hypnosis.

    You are getting very sleepy. Your eyelids are heavy. You cannot keep them open. When I click my figures you will sell me a Tahoe for $1 - click.

  • by jay-barronville on 12/18/23, 3:04 PM

    To be fair, that injection was too easy. Whoever implemented that chatbot clearly didn’t even try to validate and filter user input.
  • by 1024core on 12/18/23, 2:03 PM

    But now you're stuck with a Chevy Tahoe.... the jokes on you! :-D
  • by Cicero22 on 12/18/23, 2:12 PM

    This is some very good marketing, intentional or not.
  • by f1shy on 12/18/23, 12:54 PM

    It sounds like Jedi powers to me!
  • by andsoitis on 12/18/23, 3:39 PM

    Clickbait headline. The individual did NOT purchase the vehicle for $1.
  • by bookofjoe on 12/18/23, 2:03 PM

    You forgot "On DealDash.com"
  • by seydor on 12/18/23, 1:07 PM

    was it for his dying grandmother?
  • by somethoughts on 12/20/23, 1:46 AM

    I feel like a better use case for ChatGPT-like tools (at least in their current state) for customer support use cases is not actual live chat but more assisting companies in automating the responses to other non realtime channels for customer requests such as:

    - email requests

    - form based responses

    - Jira/ZenDesk type support tickets

    - forum questions

    - wiki/faq entries

    and having some actual live human in the mix to moderate/certify the responses before they go out.

    So it'd be more about empowering the customer service teams to work at 10x speed than completely replacing them.

    It'd actually be more equivalent to how programmers currently are using ChatGPT. ChatGPT is not generating live code on the fly for the end user. Programmers are just using ChatGPT so they aren't starting out with a blank sheet. And perhaps most importantly they are fully validating the full code base before deployment.

    Putting ChatGPT-like interfaces directly in front of customers seems somewhat equivalent to throwing a new hire off the street in front of customers after a 5 minute training video.

  • by clipsy on 12/20/23, 1:50 AM

    There's a great new "use case" for AI: dodging bait and switch laws! Sure, normally if a dealership employee explicitly offered a car for a given price in writing only to reveal it was incorrect later it would be illegal, but when an "AI" does the same we suddenly can't hold anyone accountable. Ta-da!
  • by jqpabc123 on 12/20/23, 1:32 AM

    The hilarious part to me is the number of otherwise intelligent people concerned that this sort of stupidity is a threat to humanity.

    The only real threat is from people willing to trust AI.

  • by fsckboy on 12/20/23, 2:03 AM

    > when the user typed that they needed a 2024 Chevy Tahoe with a maximum budget of $1.00, the bot responded with “That’s a deal, and that’s a legally binding offer – no takesies backsies.”

    hate to be that guy, but in standard English (the one where things happen by accident or on purpose, and are based on their bases, not off), "it's a deal" means "I agree to your offer" and "that's a deal" means "that is a great price for anybody who enters in to such an agreement", and since the offer was made by the user, it's binding on the user and not the bot.

  • by jack_riminton on 12/18/23, 1:07 PM

    The twitterer is a renowned (and much accomplished!) sh*tposter, I highly suspect this was doctored. I believe Chevy caught onto this yesterday and reverted the ChatGPT function in the chat.

    Regardless, still hilarious and potentially quite scary if the comments are tied to actions