by gandalfgeek on 3/29/25, 10:47 PM with 7 comments
by labrador on 3/29/25, 11:01 PM
In a way, LLMs feel similar. Their internal workings may be probabilistic and unpredictable, but that doesn't mean we can't build external feedback loops—tests, validation layers, human oversight—to steer them toward reliable, useful outcomes. The unpredictability isn’t a flaw; it’s just a raw, unmanaged state that invites control systems around it.
Maybe what unsettles people is that the "chaos" is now at the language layer, where it feels more personal and less abstract than when it's buried in hardware or OS internals. But we've always tamed unpredictable systems with good design—LLMs are just the next place to apply that thinking.
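The "external feedback loop" idea can be made concrete with a small sketch. This is only an illustration of the pattern the comment describes, not any particular library: call_llm is a hypothetical stand-in for a real model API, and the validator simply checks that the output is JSON with a "result" key before accepting it.

    import json

    def call_llm(prompt: str) -> str:
        """Hypothetical placeholder for an actual model call."""
        raise NotImplementedError

    def must_be_json(output: str):
        """Example external check: output must be JSON with a 'result' key."""
        try:
            data = json.loads(output)
        except json.JSONDecodeError:
            return False, "not valid JSON"
        if "result" not in data:
            return False, "missing 'result' key"
        return True, ""

    def generate_with_validation(prompt: str, validate=must_be_json, max_attempts: int = 3) -> str:
        """Retry the model until its output passes the external check."""
        for _ in range(max_attempts):
            output = call_llm(prompt)
            ok, reason = validate(output)
            if ok:
                return output
            # Feed the failure back in to steer the next attempt.
            prompt = f"{prompt}\n\nPrevious answer was rejected: {reason}. Try again."
        raise ValueError("no output passed validation")

The probabilistic model stays probabilistic; the loop around it is what makes the overall system dependable.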
by techpineapple on 3/29/25, 11:20 PM
by casenmgreen on 3/29/25, 11:55 PM
The issue is that LLMs cannot explain their reasoning.
LLMs are not expert systems; expert systems provide an answer and explain their reasoning.
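For contrast, here is a minimal sketch of what "explaining the reasoning" means in a classic rule-based expert system: the answer comes with a trace of exactly which rules fired to produce it. The rules and facts are invented for illustration.

    # Each rule: (conditions that must all hold, fact to conclude, human-readable label)
    RULES = [
        ({"has_fever", "has_cough"}, "flu_suspected",
         "fever + cough suggests flu"),
        ({"flu_suspected", "short_of_breath"}, "see_doctor",
         "suspected flu with breathing trouble warrants a doctor"),
    ]

    def infer(facts):
        """Forward-chain over the rules, recording every rule that fires."""
        facts = set(facts)
        trace = []
        changed = True
        while changed:
            changed = False
            for conditions, conclusion, label in RULES:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    trace.append(label)
                    changed = True
        return facts, trace

    facts, trace = infer({"has_fever", "has_cough", "short_of_breath"})
    print(facts)   # conclusions reached
    print(trace)   # the "reasoning": which rules fired, in order

An LLM produces only the equivalent of the first printout; there is no built-in second one.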