from Hacker News

The more sophisticated AI models get, the more likely they are to lie

by einarfd on 10/5/24, 8:38 AM with 2 comments

  • by jqpabc123 on 10/5/24, 9:31 AM

    In other words, answers derived from statistical processes are not very reliable.

    Who knew?

    In some ways, LLMs are anti-computers. They negate much of the utility that made computing popular --- instead of reliable answers at low cost, we get unreliable answers at high cost.

  • by richrichie on 10/5/24, 9:04 AM

    It is wild how humanised neural networks have become! The use of terms like “lying” or “hallucination” even in research settings is going to be problematic. I can’t articulate it well, but it is going to restrict our ability to solve problems.