from Hacker News

Give o1 ten times more context to get value from it

by dagorenouf on 1/14/25, 1:57 PM with 1 comments

  • by dtagames on 1/14/25, 3:45 PM

    I've stumbled onto this approach myself, and it works. Instead of asking one question at a time or trying to break my problem down, I give as much background and context as possible -- often in one long paste.

    I arrived at this out of frustration when, during complex debugging, ChatGPT would forget important parts of the issue -- and wind up removing or breaking something while fixing something else. Evolving an answer one piece at a time, like we used to do, no longer works with these "reasoning" models.

    Now, I give the World's Largest Post with each problem. I'll explain my issue, what I've tried, what I want, what its purpose is (the non-code, human purpose), and what I don't want to do or am trying to avoid -- all of it. My brain dump, plus the entire file or files needed to fix it.

    When I do this, I get back solutions that take all of these factors into account, and I can see in the answer's explanations that all my criteria were met. This has reduced the back-and-forth on complex coding tasks (e.g., writing a 200-line module from scratch) and often gives a 100% working output on the first shot.
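
    The assembly step the commenter describes (brain dump plus full file contents in one paste) can be sketched as a small script. This is a minimal illustration, not anything from the thread; the section titles and the `build_mega_prompt` helper are hypothetical.

    ```python
    from pathlib import Path

    def build_mega_prompt(issue, tried, goal, purpose, avoid, files):
        """Assemble one large prompt: a structured brain dump followed by
        the complete text of every file needed to fix the problem.

        Section names below are illustrative choices, mirroring the
        categories described in the comment above.
        """
        sections = [
            ("The problem", issue),
            ("What I've tried", tried),
            ("What I want", goal),
            ("Why (human purpose)", purpose),
            ("What to avoid", avoid),
        ]
        parts = [f"## {title}\n{body}" for title, body in sections]
        # Append each file verbatim so nothing is lost to summarization.
        for path in files:
            p = Path(path)
            parts.append(f"## File: {p.name}\n```\n{p.read_text()}\n```")
        return "\n\n".join(parts)
    ```

    The point is that all constraints travel together in a single message, so the model cannot "forget" one criterion while satisfying another.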