by zora_goron on 1/2/25, 5:35 AM with 12 comments
by ithkuil on 1/2/25, 9:25 AM
It reminds me of:
"if you're thinking too much, write; if you're not thinking enough, read"
It's as if the act of writing engages you in a sort of conversation with the future reader.
by treetalker on 1/2/25, 12:48 PM
Sigh.
Accurate responses in such situations would be useful for busy professionals in low-stakes scenarios.
But LLMs cannot replace the effect on the human mind that results from actually reading, understanding, and thinking about a text. There is no substitute: we must do our own thinking, because it is the work that matters — the journey, not the destination, yields the benefits.
by latexr on 1/2/25, 1:55 PM
Perhaps consider that you suffer from an incredibly short attention span. Thinking for five minutes does not exhaust all the thoughts you are going to have on a topic, and if you spend five minutes considering the implications of such a thought, you quickly realise how absurd it is.
History is filled with stories of “shower thoughts” and bolts of inspiration that came from thinking on a topic long and hard and immersing yourself in it. If your idea were true, humanity would still believe the Earth is the center of the Universe and we wouldn’t have computers. Yours is precisely the type of mentality that leads to the proliferation of scams and conspiracy theories. It’s also a worrying trend with LLMs that people are willing to turn off their brains sooner and sooner.
by satisfice on 1/2/25, 3:45 PM
I notice little evidence of testing the information that he gets from Claude. From my own testing, which I repeat every so often, I find I cannot rely on anything I get from LLMs. Not anything. Have you tried AI summaries of documents or meetings that you know well? Are you happy with the results? I have not yet seen a summary that was good enough, personally.
Also, a lot of the example use cases he offers sound like they come from someone who is not very confident in his own thinking, yet strangely super-confident in whatever an LLM says ($2000/hr consultant? really?).
Claude cannot perform an inquiry. No LLM can. These tools do not have inquiring minds, nor learning minds. He says hallucinations have reduced. How can he know that, unless he cross-checks everything he doesn’t already know?
I find LLMs exhausting and intellectually infantilizing. From this piece I cannot rule out that there is something very nice about Claude. But I also can’t rule out that there is a certain kind of addictive or co-dependent personality who falls for LLMs for unhealthy reasons primarily.