by dewarrn1 on 5/5/25, 7:33 PM with 25 comments
by datadrivenangel on 5/5/25, 8:37 PM
"Modern LLMs now use a default temperature of 1.0, and I theorize that higher value is accentuating LLM hallucination issues where the text outputs are internally consistent but factually wrong." [0]
by dewarrn1 on 5/5/25, 7:41 PM
by silisili on 5/5/25, 11:15 PM
Great. Implement it, benchmark, slower. In some cases much slower. I tell ChatGPT it's slower, and it confidently tells me of course it's slower, here's why.
The duality of LLMs, I guess.
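A rough sketch of the kind of check the commenter describes, using Python's timeit; baseline() and suggested() are hypothetical stand-ins for the original code and the LLM-proposed rewrite, not anything from the comment.

    import timeit

    def baseline(n=10_000):
        # stand-in for the original implementation
        return sum(i * i for i in range(n))

    def suggested(n=10_000):
        # stand-in for the LLM-proposed "faster" rewrite
        return sum(map(lambda i: i * i, range(n)))

    for fn in (baseline, suggested):
        elapsed = timeit.timeit(fn, number=200)
        print(f"{fn.__name__}: {elapsed:.3f}s over 200 calls")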
by scudsworth on 5/5/25, 9:07 PM
by dimal on 5/5/25, 8:57 PM
I think this need to bullshit is probably inherent in LLMs. It’s essentially what they are built to do: take a text input and transform it into a coherent text output. Truth is irrelevant. The surprising thing is that they can ever get the right answer at all, not that they bullshit so much.
by _jonas on 5/6/25, 3:08 AM
Tools to mitigate unchecked hallucination are critical for high-stakes AI applications across finance, insurance, medicine, and law. At many enterprises I work with, even straightforward AI for customer support is too unreliable without a trust layer for detecting and remediating hallucinations.
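One simple building block such a trust layer might use (my sketch, assuming nothing about the commenter's actual tooling) is a self-consistency check: re-sample the same question several times and flag the answer when the samples disagree.

    from collections import Counter

    def consistency_score(answers):
        """Fraction of sampled answers agreeing with the most common one.

        `answers` would come from re-asking the model the same question at a
        nonzero temperature; low agreement is one signal of hallucination.
        """
        counts = Counter(a.strip().lower() for a in answers)
        most_common_count = counts.most_common(1)[0][1]
        return most_common_count / len(answers)

    samples = ["Paris", "Paris", "Lyon", "Paris", "Paris"]  # hypothetical model outputs
    score = consistency_score(samples)
    if score < 0.8:
        print(f"low agreement ({score:.2f}) -- escalate to a human or abstain")
    else:
        print(f"high agreement ({score:.2f}) -- answer looks stable")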
by hyperhello on 5/6/25, 12:22 AM
by nataliste on 5/6/25, 11:34 AM
Hallucinations represent the interpolation phase: the uncertain, unstable cognitive state in which novel meanings are formed, unanchored from verification. They precede both insight and error.
I strongly encourage reading Julian Jaynes's The Origin of Consciousness in the Breakdown of the Bicameral Mind, as the Command/Obey structure of User/LLM is exactly what Jaynes posited pre-conscious human mentality consisted of. His supposition is that prior to modern self-awareness, humans made artifacts and satisfied external mandates issued by an externally perceived commander they identified with gods. I posit that we play the same role for LLMs.

Iain McGilchrist's The Master and His Emissary sheds light on this dynamic as well. LLMs are effectively cybernetic left hemispheres, with all the epistemological problems that entails when operating loosely under an imperial right hemisphere (i.e. the user). The left hemisphere lacks awareness of its own coherence with reality and relies on the right hemisphere to provoke coherent action independent of itself; it sees truth as internal coherence of the system, not correspondence with the reality we experience.
McGilchrist again: "Language enables the left hemisphere to represent the world ‘off-line’, a conceptual version, distinct from the world of experience, and shielded from the immediate environment, with its insistent impressions, feelings and demands, abstracted from the body, no longer dealing with what is concrete, specific, individual, unrepeatable, and constantly changing, but with a disembodied representation of the world, abstracted, central, not particularised in time and place, generally applicable, clear and fixed. Isolating things artificially from their context brings the advantage of enabling us to focus intently on a particular aspect of reality and how it can be modelled, so that it can be grasped and controlled. But its losses are in the picture as a whole. Whatever lies in the realm of the implicit, or depends on flexibility, whatever can't be brought into focus and fixed, ceases to exist as far as the speaking hemisphere is concerned."
by bdangubic on 5/5/25, 9:15 PM