from Hacker News

Insurers launch cover for losses caused by AI chatbot errors

by jmacd on 5/11/25, 10:07 AM with 64 comments

  • by loeber on 5/13/25, 5:29 PM

    Insurance tech guy here. This is not the revolutionary new type of insurance that it might look like at first glance. It's an adaptation of already-commonplace insurance products that are limited in their market size. If you're curious about this topic, I've written about it at length: https://loeber.substack.com/p/24-insurance-for-ai-easier-sai...
  • by conartist6 on 5/11/25, 11:55 AM

    Man I wish I could get insurance like that. "Accountability insurance"

    You were responsible for something, say, child care, and you just decided to go for a beer and leave the child with an AI. The house burns down, but because you had insurance you are not responsible. You just head along to your next child care job and don't worry too much about it.

  • by imoverclocked on 5/13/25, 7:18 PM

    At best, this screams, “you’re doing it wrong.”

    We know this stuff isn’t ready, is easily hacked, is undesirable to consumers… and will fail. Somehow, it’s still more efficient to cover losses and degrade service than to approach the problem differently.

  • by Neywiny on 5/11/25, 11:23 AM

    No mercy. Had to deal with one when looking for apartments, and it made up whatever answer it thought I wanted to hear. Good thing they still had humans around in person when I went for a tour.
  • by DonHopkins on 5/13/25, 8:41 PM

    Can consumers get AI insurance that covers eating a pizza with glue on it, or eating a rock?

    https://www.forbes.com/sites/jackkelly/2024/05/31/google-ai-...

    How about MAGA insurance that covers injecting disinfectant, or eating horse dewormer pills, or voting for tariffs?

  • by fsfod on 5/14/25, 2:19 AM

    I wonder if the premiums scale up depending on the temperature used for the model output.
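
    For context on that knob: sampling temperature divides the model's output logits before the softmax, so higher values flatten the token distribution and make completions more varied and more error-prone. A minimal Python sketch of the mechanism, with hypothetical logit values rather than any real model's output:

      import numpy as np

      def sample_with_temperature(logits, temperature=1.0, rng=None):
          # Low temperature sharpens the distribution toward the top
          # token; high temperature flattens it toward uniform.
          rng = rng or np.random.default_rng()
          scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
          probs = np.exp(scaled - scaled.max())  # numerically stable softmax
          probs /= probs.sum()
          return rng.choice(len(probs), p=probs)

      logits = [2.0, 1.0, 0.5, 0.1]                # hypothetical values
      print(sample_with_temperature(logits, 0.2))  # almost always index 0
      print(sample_with_temperature(logits, 2.0))  # noticeably more varied
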
  • by JumpCrisscross on 5/13/25, 11:57 PM

    Oooh, the foundation-model developers could offer to take first losses up to X when application developers follow a rule set. This would reduce premiums and thus increase uptake among users of their models.
  • by 85392_school on 5/13/25, 9:35 PM

    Reading the actual article, this seems odd. It only covers cases where the models degrade, but there hasn't been evidence of an LLM pinned to a checkpoint degrading yet.
  • by yieldcrv on 5/13/25, 10:28 PM

    AI that is accurate enough, despite the occasional hallucination, should just carry Errors and Omissions insurance like human contractors do
  • by AzzyHN on 5/14/25, 4:13 PM

    I wonder who makes more errors, underpaid & undertrained employees, or AI chatbots.
  • by otabdeveloper4 on 5/14/25, 7:42 PM

    Whew. Somebody finally figured out how to make money off the nu-AI bubble.
  • by vfclists on 5/14/25, 2:14 PM

    Pretty sure it will wind up like insurance against malware such as NotPetya.
  • by aatd86 on 5/14/25, 9:18 PM

    And now with MCP... one should make sure not to allow agents access to sensitive capabilities.
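
    A minimal sketch of that kind of capability gating: filter the tool set an agent can see against an explicit allowlist before wiring anything up. The tool names and the gate_tools helper below are hypothetical illustrations, not the real MCP SDK API:

      SAFE_TOOLS = {"search_docs", "get_weather"}  # explicit allowlist

      def gate_tools(available_tools):
          # Drop anything not explicitly allowed, so sensitive
          # capabilities never reach the agent at all.
          return {name: fn for name, fn in available_tools.items()
                  if name in SAFE_TOOLS}

      tools = {
          "search_docs": lambda q: "results for " + q,
          "delete_file": lambda p: "deleted " + p,  # sensitive: filtered out
      }
      print(list(gate_tools(tools)))  # ['search_docs']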