from Hacker News

Ask HN: What is your current LLM-assisted coding tool?

by HiPHInch on 6/2/25, 4:06 PM with 4 comments

Hello everyone, I’m testing and comparing various LLM-assisted coding tools, and I want to know which tool you are currently using in your daily development workflow. Here are some observations and questions I have:

1. Cursor and Windsurf

   - Both work well locally, but they use token-saving strategies:

     - With very long context, they may truncate important information, causing the suggested code to miss key details.

     - Even in normal scenarios, complex cases might exceed context or quota limits, interrupting suggestions.
2. “Roo Code” and API-based approaches

   - Directly calling paid APIs (e.g., OpenAI’s ChatGPT/GPT-4 API) works well but is expensive (a sketch of this approach follows the list).

   - Some free or community APIs (open-source mirrors, community editions) can be unstable, rate-limited, or slow.
3. Augment Code

   - It’s said to be one of the most “intelligent” commercial products, but it’s also costly.

   - Many recommend its rewriting, refactoring, and test-generation abilities, but for simple code completion its cost-performance ratio may be worse than that of smaller vendors or open-source plugins.
4. Refact.ai

   - Listed at the top of SWE-bench, it claims to support code refactoring, generating comments via LLMs, batch rewrites, and more.

   - However, it seems rarely discussed in developer circles. How well does it work in practice?
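
For reference, here is a minimal sketch of the “directly calling a paid API” approach from item 2, using the official `openai` Python client; the model name and prompt are illustrative assumptions, not a recommendation from this post:

```python
# Sketch only: send a coding prompt straight to a hosted chat-completions
# endpoint and print the reply. Assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {"role": "system", "content": "You are a concise coding assistant."},
        {"role": "user", "content": "Write a Python function that deduplicates a list while preserving order."},
    ],
)

print(response.choices[0].message.content)
```
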
Questions for the community:

- Which LLM-assisted coding tool are you currently using? (IDE plugin, standalone client, or API-based)

- What are the main reasons for choosing it? (e.g., cost, response speed, context length support, feature set, etc.)

- What pros and cons have you encountered during actual development? Specifically, how does it perform for debugging, refactoring, generating unit tests, automatic bug fixes, etc.?

- If you have switched tools before, why did you switch?

Thank you for sharing your experiences!

  • by jasonthorsness on 6/2/25, 4:38 PM

    I keep Copilot-based autocomplete turned on everywhere and occasionally use the inline chat with Claude Sonnet 4, but honestly, for a lot of stuff I'm still stuck on copy/pasting from ChatGPT o4-mini-high. I've found I get the best responses when I fully control the context and inputs, independent of my workspace.
  • by garbagecoder on 6/2/25, 4:09 PM

    I've been trying blackbox.ai in VSCode for the last little while. In my very limited experience, it either gets it close to right in the first few tries or gets stuck in a kind of testing loop, writing more and more ornate tests that still miss one critical bug.

    It's the first sort of "magic wand" coding AI I've used; in the past I would just ask questions of ChatGPT or Gemini.

  • by runjake on 6/3/25, 1:10 AM

    I use Cursor and GitHub Copilot as “fancy autocomplete”. On rare occasions I use their chat agents, but I run into context limits, and I generally don’t like LLMs going in and modifying code I’ve already written.

    I also use Claude to ask general code questions, have it explain something, or get ideas and code examples.

    I am comfortably behind the state of the art.

  • by bawolski on 6/3/25, 10:21 PM

    Cline with Gemini Flash, and Gemini Pro when Flash isn’t enough to push something through.