from Hacker News

Cognitive Behaviors That Enable Self-Improving Reasoners

by delifue on 3/6/25, 1:33 AM with 103 comments

  • by owenpalmer on 3/6/25, 3:21 AM

    > four key cognitive behaviors -- verification, backtracking, subgoal setting, and backward chaining -- that both expert human problem solvers and successful language models employ.

    As we make AI better, perhaps we'll inadvertently find ways to make HI (human intelligence) better too.

    I had a personal experience with this when I was studying for an exam recently. As I read over practice questions, I spoke aloud, replicating the reasoning methods/personality of Deepseek R1. By spending a lot of time reading long verbose R1 outputs, I've essentially fine-tuned my brain for reasoning tasks. I believe this method contributed to my excellent score on that exam.

  • by meindnoch on 3/6/25, 9:43 AM

    At this point I can't tell from the title whether it's a self-help psychology fad or an LLM paper.

  • by robocat on 3/6/25, 3:20 AM

    How much has our knowledge of AI training techniques helped us discover how to train people to think better?

  • by nickpsecurity on 3/6/25, 5:58 AM

    "models primed with incorrect solutions containing proper reasoning patterns achieve comparable performance to those trained on correct solutions"

    One of the parts most worth a replication study.

  • by idiotsecant on 3/6/25, 5:40 AM

    I sometimes see these reddit threads of people talking about the experience of having an internal monologue. I have no such monologue, at least not one that is accessible to the part of my mind that calls itself 'me', but I have often wondered if that monologue is something like a 'chain of thought'. I feel like without access to that 'idea feed', my planning and executive functioning may be less effective than some other people's. I do find myself quite a bit more effective at those sorts of tasks when I use a little 'chain of thought' notepad.

    I also suspect I spend less time ruminating, second-guessing myself, and engaging in other anxious behaviours that I imagine would come with having someone talking in your ear all day, but that's probably off topic.

  • by spwa4 on 3/6/25, 2:29 PM

    True, but a problem is that self-improving AI leads to a somewhat troubling mode of thinking. AIs switch to an internal babbling-type language that makes no sense to us but clearly still conveys meaning to the AIs, think in that language (if it's a language, though it's not clear what else it could be), and then produce correct results.

    Worse, when you use multiple agents to get LLMs talking to one another, all of the agents switch to this internal language and make progress despite no human understanding what the hell is happening. This seems very bad.

    Illustration:

    > How many r in strawberry?

    I'm asked how many r in strawberry. I can just spell the word and a;dklsjaw; a;ewjraqwpeouypaads;lq qepwiouryaqeopw qewrpoiuyoiauysdqw145124rfa.nkjlwh ;45a8345a894ya4a q4p58q45jaq;lkjas;dlfkja;j

    <answer>There are 3 (three) r's in strawberry</answer>

  • by miksik on 3/6/25, 9:50 AM

    > four key cognitive behaviors -- verification, backtracking, subgoal setting, and backward chaining -- that both expert human problem solvers and successful language models employ.

    On what basis do they claim that such methods are used by expert human problem solvers?

  • by glass_door on 3/6/25, 1:13 PM

    Does this also mean that giving better system prompts which encourage this behaviour would substantially help?

  • by kittikitti on 3/6/25, 9:46 PM

    ``think''

    In the abstract, they use different characters for the opening and closing double quotes here.