by zentara_code on 6/9/25, 5:42 PM with 2 comments
So how can I get them out of this loop of wrong conclusions? I need to feed them new, different context. To find the real root cause, they need more information: they should be able to investigate and experiment with the code. One proven tool that seasoned software engineers use for this is a debugger, which lets you inspect stack variables and the call stack.
So I looked for existing solutions. An interesting approach is an MCP server with debugging capabilities, but I was not able to make it work stably in my setup: the Roo-Code extension communicates with the MCP server extension over a remote transport, and I kept running into communication problems. Most MCP solutions I have seen use the stdio transport instead.
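For reference, a minimal stdio-based MCP server built with the official TypeScript SDK looks roughly like the sketch below; the inspect_variable tool is a hypothetical placeholder, not taken from Zentara or any existing debugging MCP server:

    // Minimal MCP server over stdio (sketch, official TypeScript SDK).
    import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
    import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
    import { z } from "zod";

    const server = new McpServer({ name: "debug-demo", version: "0.1.0" });

    // Hypothetical debugging-style tool; a real server would query a live debugger.
    server.tool(
      "inspect_variable",
      { name: z.string() },
      async ({ name }) => ({
        content: [{ type: "text", text: `value of ${name}: <not implemented>` }],
      })
    );

    // stdio transport: the client spawns this process and talks over stdin/stdout,
    // sidestepping the remote-transport connection issues described above.
    const transport = new StdioServerTransport();
    await server.connect(transport);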
So I decided to roll up my sleeves, integrate debugging capabilities into my favorite code agent, Roo-Code, and give it a name: Zentara-Code.
Zentara-Code can write code like Roo-Code, and it can debug the code it writes through runtime inspection.
My previous discussion regarding Zentara is here: https://www.reddit.com/r/LocalLLaMA/comments/1l1ggkp/demo_i_...
I would love to hear your experience and feedback. It would be great if you could test it in different languages.
Documentation: zentar.ai
Github: github.com/Zentar-Ai/zentara-code/
VS Code Marketplace: marketplace.visualstudio.com/items/?itemName=ZentarAI.zentara-code
by careful_ai on 6/11/25, 9:14 AM
The idea that Zentara can drive VS Code’s debugger—setting breakpoints, stepping into stack frames, inspecting variables, even modifying execution state—elevates it far beyond most LLM-based tools. It closes the loop between generate → execute → analyze → fix.
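For anyone curious what that looks like at the API level, here is a rough TypeScript sketch using VS Code's public vscode.debug API plus raw Debug Adapter Protocol requests; the function and the "Launch tests" configuration name are made up, and this is not Zentara's actual code:

    import * as vscode from "vscode";

    // Set a breakpoint, start a debug session, then read the locals of the
    // top stack frame via Debug Adapter Protocol requests.
    export async function inspectTopFrame(uri: vscode.Uri, line: number) {
      const bp = new vscode.SourceBreakpoint(
        new vscode.Location(uri, new vscode.Position(line, 0)) // line is 0-based
      );
      vscode.debug.addBreakpoints([bp]);

      const folder = vscode.workspace.workspaceFolders?.[0];
      await vscode.debug.startDebugging(folder, "Launch tests"); // launch.json config name

      // In practice you wait for a "stopped" event (e.g. via a DebugAdapterTracker)
      // before issuing these requests.
      const session = vscode.debug.activeDebugSession;
      if (!session) return;

      const { threads } = await session.customRequest("threads");
      const { stackFrames } = await session.customRequest("stackTrace", { threadId: threads[0].id });
      const { scopes } = await session.customRequest("scopes", { frameId: stackFrames[0].id });
      const { variables } = await session.customRequest("variables", {
        variablesReference: scopes[0].variablesReference,
      });
      console.log(variables); // names and values of the frame's local variables
    }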
What’s especially compelling is the human-in-the-loop model. The author describes how they guided the AI: “99.9% written by AI, but still requires us to review, correct, and sometimes prompt the AI”. That mirrors the consensus on HN—AI shines as a partner, not a replacement.
Curious how it performs on non-trivial codebases or frameworks with heavy I/O or async flows. And from an enterprise perspective, this kind of system could be complemented by tools like Techolution’s Project Analyzer—automatically surfacing trouble spots while engineers steer the refactors and safety checks.
Overall: this is a solid step toward smart coding agents—runtime-aware, interactive, and still human-guided. Would love to hear others’ experiences running it on messy monorepos or legacy stacks.