from Hacker News

New OpenAI models hallucinate more

by pixiemaster on 4/19/25, 11:40 AM with 2 comments

  • by 8474_s on 4/19/25, 4:36 PM

    Looks like the latest ChatGPT likes to hallucinate code segments that don't work, or wrong API calls. Simpler instructions work fine, but at some point it starts to diverge and experiment.
  • by jqpabc123 on 4/19/25, 11:57 AM

    Is it the models, or the fact that earlier hallucinations are starting to spread, accumulate, and contaminate training data?

    Performance degradation over time has been a recognized concern for a while now.