from Hacker News

Ask HN: Maintaining code quality with widespread AI coding tools?

by raydenvm on 5/11/25, 8:42 AM with 5 comments

I've noticed a trend: as more devs at my company (and in projects I contribute to) adopt AI coding assistants, code quality seems to be slipping. It's a subtle change, but it's there.

The issues I keep noticing:

- More "almost correct" code that causes subtle bugs
- Less consistent architecture across the codebase
- More copy-pasted boilerplate that should be refactored

I know the argument: maybe we shouldn't care about overall quality because eventually only AI will be looking at the code. But that's a fairly distant version of the future. For now, we have to manage the speed/quality balance ourselves, with AI agents helping.

So I'm curious: for teams that are making AI tools work without sacrificing quality, what's your approach? Is there anything new you're doing, like special review processes, new metrics, training, or team guidelines?

  • by sargstuff on 5/11/25, 4:32 PM

    Code quality? More like management quality. AI provides the ability to spot possible 'issues'/conflicts sooner.

    You really need to be adhering to a set of defined specifications (functional / non-functional / domain-specific) at the work/project level, and/or looking at what level(s) those specifications are still relevant after they've been defined (historically handled via different management levels). Note: that doesn't necessarily mean rigid specs first, code next, documentation last.

    A significant amount of coding is "DFA"-like once the pre/post environment is set/defined: repository check-in/check-out can be set up to do specification checking/diffing for auto-documentation, and 'language/project feature requirements' (use, do not use, only use when, never use) can be enforced/filtered the same way. Above a certain 'size', spotting 're-inventions' becomes a matter of AI statistical inference over the amount of information available.
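
    For example, a rough sketch of that kind of check-in feature filter as a pre-commit hook (the banned patterns and the Python-only focus are made-up placeholders, not anyone's actual policy):

      #!/usr/bin/env python3
      # Sketch of a pre-commit "feature filter": scan staged files for
      # constructs the project has declared "never use" / "only use when".
      # The BANNED table is a made-up example policy.
      import re
      import subprocess
      import sys

      BANNED = {
          r"\beval\(": "never use: eval()",
          r"\bprint\(": "only use when debugging; remove before commit",
      }

      def staged_python_files():
          out = subprocess.run(
              ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
              capture_output=True, text=True, check=True,
          ).stdout
          return [f for f in out.splitlines() if f.endswith(".py")]

      def main():
          failures = []
          for path in staged_python_files():
              with open(path, encoding="utf-8") as fh:
                  for lineno, line in enumerate(fh, 1):
                      for pattern, rule in BANNED.items():
                          if re.search(pattern, line):
                              failures.append(f"{path}:{lineno}: {rule}")
          if failures:
              print("\n".join(failures))
              sys.exit(1)

      if __name__ == "__main__":
          main()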

    Non-DFA, aka "context sensitive" stuff: AI would only make sense if there's a way to compare specifications with 'intentions', i.e. to generate confidence in how well a newer coder has been on-boarded, judged from their coding attempts against the project/work specifications. It could perhaps also give workplace management insight into how relevant things actually are (versus "the worker is the issue"): non-adherence to the 'spec' because the spec doesn't cover the issue(s) means it's time to review the spec. You still need human(s) in the loop to figure out the relevant tangibles/intangibles. AI can certainly help identify ambiguities in specifications and in how specifications are implemented/used, aka code debt and code drift.

  • by mentalgear on 5/11/25, 9:09 AM

    I also share this experience/concern.

    Yet it could be as simple as having a specialised model that acts as a code quality checker, refactorer, or QA tester.
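
    For instance, something as small as this sketch wired into CI (it assumes an OpenAI-style client purely as an example; the model name and prompt are placeholders, and any provider would do):

      # Sketch of a "code quality checker" model reviewing the latest diff.
      # Model name and prompt are placeholders; swap in your own provider.
      import subprocess

      from openai import OpenAI

      client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

      def review_latest_commit() -> str:
          diff = subprocess.run(
              ["git", "diff", "HEAD~1"],
              capture_output=True, text=True, check=True,
          ).stdout
          response = client.chat.completions.create(
              model="gpt-4o-mini",  # placeholder model name
              messages=[
                  {"role": "system",
                   "content": "You are a strict code quality reviewer. Flag subtle "
                              "bugs, inconsistent architecture, and copy-pasted "
                              "boilerplate that should be refactored."},
                  {"role": "user", "content": diff},
              ],
          )
          return response.choices[0].message.content

      if __name__ == "__main__":
          print(review_latest_commit())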

    Also, Claimify (MS Research) could be interesting for isolating claims about what the code should do, and then following up by writing granular unit test coverage against them.
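
    Hand-waving a bit, but the follow-up step could look like this: the claims below are made-up stand-ins for what a claim-extraction pass (Claimify or otherwise) might produce, and each one becomes a granular pytest stub to fill in:

      # Naive sketch: turn extracted behavioural claims into granular pytest stubs.
      # The claims are made-up examples; a real list would come from a
      # claim-extraction step over the spec/PR description.
      claims = [
          "discounts are never applied twice to the same order",
          "an empty cart returns a total of zero",
      ]

      def slugify(text: str) -> str:
          return "".join(c if c.isalnum() else "_" for c in text.lower()).strip("_")

      def emit_test_stubs(claims: list[str]) -> str:
          lines = ["import pytest", ""]
          for claim in claims:
              lines += [
                  f"def test_{slugify(claim)}():",
                  f'    """Claim: {claim}"""',
                  '    pytest.fail("not implemented yet")',
                  "",
              ]
          return "\n".join(lines)

      if __name__ == "__main__":
          print(emit_test_stubs(claims))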

  • by furrball010 on 5/11/25, 12:17 PM

    I share your concern, but perhaps for a different reason. I think the more code is added, the more problems/bugs emerge, whether a human or AI codes it.

    However, with AI coding tools it's becoming a lot easier to write A LOT of code. And all that code (just as when a human writes it) adds complexity and bugs. So it's not just the quality, it's also the quantity of code that damages existing code bases (in my view).