by prasoonds on 10/2/24, 3:31 PM with 4 comments
Normally, I'm working in a small part of the codebase and I can give really concrete instructions and the right context to Cursor. Here, it works really well.
But when I ask it questions that are more generic (where's the code that does X? / how can I do X? / implement <functionality> using X and Y), it often hallucinates or gives me wrong answers.
I can see that it tries to find the right context to send to the LLM. Sometimes it finds it, other times it doesn't. Even when it does, I'm guessing too much context gets sent to the LLM, so it ends up hallucinating anyway.
Have you had this same problem with whichever AI coding tool you use? I'm wondering if the problem is specific to the large legacy codebase I'm working with, or if it's a more general issue with code the LLM hasn't seen in its training data.
by dgosling56 on 10/5/24, 4:08 AM
https://chromewebstore.google.com/detail/rocky-ai/fdjoklehji...
by skeptrune on 10/2/24, 3:41 PM
We have a relatively large Rust codebase and I would describe the LLMs as actively harmful when making agentic-style, self-piloted changes.
Very deliberate context management, with split screens and explicit inclusion of the relevant files in the context, is sometimes useful though.
My biggest gripe is that AI can't reliably generate a new SQL migration and CRUD routes for an additional resource we want to add. I always end up punting and doing it myself.
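To make that concrete, here is a rough sketch of the kind of change being described, assuming axum + sqlx and a made-up `projects` resource; the actual crates and schema in that codebase aren't stated, so treat this purely as an illustration:

    // Hypothetical sketch of "a new SQL migration plus CRUD routes" for an
    // added resource. The axum + sqlx choice and the `projects` resource are
    // assumptions, not what the commenter's codebase actually uses.
    //
    // migrations/NNNN_create_projects.sql (hypothetical):
    //   CREATE TABLE projects (
    //       id   UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    //       name TEXT NOT NULL
    //   );

    use axum::{
        extract::{Path, State},
        routing::get,
        Json, Router,
    };
    use serde::{Deserialize, Serialize};
    use sqlx::PgPool;
    use uuid::Uuid;

    #[derive(Serialize, sqlx::FromRow)]
    struct Project {
        id: Uuid,
        name: String,
    }

    #[derive(Deserialize)]
    struct NewProject {
        name: String,
    }

    // List all projects. Error handling is omitted for brevity.
    async fn list_projects(State(pool): State<PgPool>) -> Json<Vec<Project>> {
        let rows = sqlx::query_as::<_, Project>("SELECT id, name FROM projects")
            .fetch_all(&pool)
            .await
            .unwrap();
        Json(rows)
    }

    // Insert a new project and return the created row.
    async fn create_project(
        State(pool): State<PgPool>,
        Json(input): Json<NewProject>,
    ) -> Json<Project> {
        let row = sqlx::query_as::<_, Project>(
            "INSERT INTO projects (name) VALUES ($1) RETURNING id, name",
        )
        .bind(input.name)
        .fetch_one(&pool)
        .await
        .unwrap();
        Json(row)
    }

    // Fetch a single project by id.
    async fn get_project(State(pool): State<PgPool>, Path(id): Path<Uuid>) -> Json<Project> {
        let row = sqlx::query_as::<_, Project>("SELECT id, name FROM projects WHERE id = $1")
            .bind(id)
            .fetch_one(&pool)
            .await
            .unwrap();
        Json(row)
    }

    // Wire the routes up (axum 0.7 path syntax).
    fn project_routes(pool: PgPool) -> Router {
        Router::new()
            .route("/projects", get(list_projects).post(create_project))
            .route("/projects/:id", get(get_project))
            .with_state(pool)
    }

It's mostly mechanical boilerplate like this that you'd expect an LLM to handle, which is why failures on it are frustrating.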
by prasoonds on 10/2/24, 3:34 PM
- I've found LLM codegen works very well with standard React + TS code
- It sucks with less-known languages or less popular frameworks (in one instance I tried reverse engineering the GitHub Copilot Lua code from the Neovim extension and it really didn't work well)
I'd be curious to hear people's experience here.