by 0x63_Problems on 11/14/24, 4:01 PM with 240 comments
by perrygeo on 11/14/24, 4:50 PM
This mirrors my experience using LLMs on personal projects. They can provide good advice only to the extent that your project stays within the bounds of well-known patterns. As soon as your codebase gets a little bit "weird" (ie trying to do anything novel and interesting), the model chokes, starts hallucinating, and makes your job considerably harder.
Put another way, LLMs make the easy stuff easier, but royally screw up the hard stuff. The gap does appear to be widening, not shrinking. They work best where we need them the least.
by dkdbejwi383 on 11/14/24, 4:51 PM
I'd argue that a lot of this is not "tech debt" but just signs of maturity in a codebase. Real world business requirements don't often map cleanly onto any given pattern. Over time codebases develop these "scars", little patches of weirdness. It's often tempting for the younger, less experienced engineer to declare this as tech debt or cruft or whatever, and that a full re-write is needed. Only to re-learn the lessons those scars taught in the first place.
by swatcoder on 11/14/24, 5:46 PM
Wow. It's hard to believe that people are earnestly supposing this. From everything we have evidence of so far, AI generated code is destined to be a prolific font of tech debt. It's irregular, inconsistent, highly sensitive to specific prompting and context inputs, and generally produces "make do" code at best. It can be extremely "cheap" vs traditional contributions, but gets to where it's going by the shortest path rather than the most forward-looking or comprehensive.
And so it does indeed work best with young projects where the prevailing tech debt load remains low enough that the project can absorb large additions of new debt and incoherence, but that's not to the advantage of young projects. It's setting those projects up to be young and debt-swamped much sooner than they would otherwise be.
If mature projects can't use generative AI as extensively, that's going to be to their advantage, not their detriment -- at least in terms of tech debt. They'll be forced to continue plodding along at their lumbering pace while competitors bloom and burst in cycles of rapid initial development followed by premature seizure/collapse.
And to be clear: AI generated code can have real value, but the framing of this article is bonkers.
by LittleTimothy on 11/14/24, 8:02 PM
So instead of genAI doing the rubbish, boring, low-status parts of the job, you should do the bits no one will reward you for, and then watch as your boss waxes lyrical about how amazing genAI is once you've done all the hard work for it?
It just feels like if you're re-directing your efforts to help the AI, because the AI isn't very good at actual complex coding tasks then... what's the benefit of AI in the first place? It's nice that it helps you with the easy bit, but the easy bit shouldn't be that much of your actual work and at the end of the day... it's easy?
This gives very similar vibes to: "I wanted machines to do all the soul-crushing monotonous jobs so we would be free to paint, write books, and fulfill our creative passions, but instead we've created a machine that can trivially create any artwork but can't work a till."
by nimish on 11/14/24, 5:26 PM
Machine learning is the high interest credit card of technical debt.
by mkleczek on 11/14/24, 6:19 PM
So every time we generate the same boilerplate, we are really copy/pasting, adding to maintenance costs.
We are amazed by the code generation capabilities of LLMs, forgetting that the goal is to have less code, not more.
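A minimal sketch of the point, with hypothetical names: an LLM will happily regenerate the same validation block for every field, where one parameterized helper would do:

    # Hypothetical example: the model happily emits this shape per field...
    def validate_name(payload):
        if not isinstance(payload.get("name"), str):
            raise ValueError("name must be a string")
        return payload["name"]

    def validate_email(payload):
        if not isinstance(payload.get("email"), str):
            raise ValueError("email must be a string")
        return payload["email"]

    # ...where one parameterized helper removes the duplication entirely:
    def require_str(payload, key):
        if not isinstance(payload.get(key), str):
            raise ValueError(f"{key} must be a string")
        return payload[key]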
by tired_and_awake on 11/14/24, 8:16 PM
We are a long way from automating our jobs away; instead, our expertise evolves.
I suspect doctors go through a similar evolution as surgical methods are updated.
I would love to read or participate in the discussion of how to be strategic in this new world. Specifically, how to best utilize code generating tools as a SWE. I suppose I can wait a couple of years for new school SWEs to teach me, unless anyone is aware of content on this?
by inSenCite on 11/14/24, 10:47 PM
Blind copy-paste has generally been a bad idea, though. You still need to read the code it spits out, ask for explanations, and do some iterating.
by bob1029 on 11/14/24, 5:16 PM
> This experience has led most developers to “watch and wait” for the tools to improve until they can handle ‘production-level’ complexity in software.
You will be waiting until the heat death of the universe.
If you are unable to articulate the exact nature of your problem, it won't ever matter how powerful the model is. Even a nuclear weapon will fail to have effect on target if you can't approximate its location.
Ideas like dumping the entire codebase into a gigantic context window seem insufficient, since the reason you are involved in the first place is that the heap is not doing what the customer wants it to do. It is currently a representation of where you don't want to be.
by amelius on 11/14/24, 4:59 PM
Because with AI you can turn any problem into a black box. You build a model, and call it "solved". But then reality hits ...
by vander_elst on 11/14/24, 4:45 PM
I thought that at the beginning the code might be a bit messy because of the need to iterate fast, and that quality comes with time. What's the experience of the crowd on this?
by byyoung3 on 11/15/24, 7:20 AM
The codebases that use the MOST COMMONLY USED LIBRARIES benefit the most from generative AI tools
by squillion on 11/14/24, 10:08 PM
I completely agree. That's why my stance is to wait and see, and in the meanwhile get our shit together, as in make our code maintainable by any intelligent being, human or not.
by phillipcarter on 11/14/24, 5:28 PM
Missing tests? Great: I'll get help identifying what the code should be doing, then use AI to write a boatload of tests in service of those goals. Then I'll use it to help refactor some of the code.
But unlike the article, this requires actively engaging with the tool rather than taking, as they say, a "sit and wait" (i.e., lazy) approach to development.
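A sketch of that workflow (the function and its behaviour here are hypothetical): pin down what the legacy code currently does with characterization tests, then refactor against them:

    import unittest

    # Hypothetical legacy function whose intended behaviour is undocumented.
    def legacy_discount(total, customer_type):
        if customer_type == "vip":
            return total * 0.8
        return total if total < 100 else total * 0.95

    class CharacterizationTests(unittest.TestCase):
        # Pin down what the code *currently* does before refactoring it.
        def test_vip_always_gets_20_percent_off(self):
            self.assertEqual(legacy_discount(50, "vip"), 40.0)

        def test_small_orders_pay_full_price(self):
            self.assertEqual(legacy_discount(99, "regular"), 99)

        def test_large_orders_get_5_percent_off(self):
            self.assertEqual(legacy_discount(200, "regular"), 190.0)

    if __name__ == "__main__":
        unittest.main()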
by Halan on 11/14/24, 7:33 PM
For example, a RAG pipeline. People are rushing things to market that are not built to last. The likes of LangChain offer little software-engineering polish. I wish there were a more mature enterprise framework. Spring AI is still in the making, and Go is lagging behind.
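For context, the retrieval step such frameworks wrap is conceptually small; a bare-bones sketch, with a toy bag-of-words stand-in for a real embedding model:

    from collections import Counter
    import math

    # Toy stand-in for a real embedding model (assumption: bag-of-words).
    def embed(text):
        return Counter(text.lower().split())

    def cosine(a, b):
        dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
        norm = math.sqrt(sum(v * v for v in a.values())) * \
               math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    def retrieve(query, documents, k=2):
        q = embed(query)
        scored = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
        return scored[:k]  # the top-k snippets would be prepended to the prompt

    docs = ["invoices are stored in s3", "auth uses oauth2 tokens", "retry with backoff"]
    print(retrieve("where are the invoices stored", docs, k=1))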
by yawnxyz on 11/14/24, 7:50 PM
Asking it for higher level planning / architecture is just asking for pain
by grahamj on 11/14/24, 5:01 PM
OTOH, if devs are getting the simpler stuff done faster, maybe they have more time to work on debt.
by stego-tech on 11/14/24, 6:07 PM
LLMs can’t understand why your firewall rules have strange forwards for ancient enterprise systems, nor can they “automate” Operations on legacy systems or custom implementations. The only way to fix those issues is to throw money and political will behind addressing technical debt in a permanent sense, which no organization seemingly wants to do.
These things aren’t silver bullets, and throwing more technology at an inherently political problem (tech debt) won’t ever solve it.
by ImaCake on 11/14/24, 11:28 PM
I find this works because it's much easier to debug a subtle GPT bug in a well-validated interface than the same bug buried in a nested for loop somewhere.
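A minimal sketch of what "well validated interface" might mean here (the JSON shape is hypothetical): check the model's output at the boundary so a malformed reply fails loudly there, not deep inside a loop later:

    import json

    def parse_model_reply(raw):
        data = json.loads(raw)  # raises immediately on non-JSON replies
        if not isinstance(data, dict):
            raise TypeError("expected a JSON object")
        if not isinstance(data.get("labels"), list):
            raise ValueError("reply must contain a 'labels' list")
        if not all(isinstance(x, str) for x in data["labels"]):
            raise ValueError("every label must be a string")
        return data["labels"]

    labels = parse_model_reply('{"labels": ["bug", "tech-debt"]}')
    print(labels)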
by btbuildem on 11/14/24, 6:53 PM
This is for tiny code snippets, hello-world size, stringing together some primitives to render relatively simple objects.
Turns out, if the codebase / framework is a bit obscure and poorly documented, even the genie can't help.
by kazinator on 11/14/24, 6:03 PM
So you say, but {citation needed}. Stuff like this is simply not known yet.
AI can easily be applied in legacy codebases, like to help with time-consuming refactoring.
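A sketch of the sort of mechanical refactor meant here, with hypothetical code: same behaviour, clearer intent, and easy to verify:

    # Before: index-juggling that is tedious to clean up by hand at scale.
    def find_dupes_old(items):
        dupes = []
        for i in range(len(items)):
            for j in range(len(items)):
                if i < j and items[i] == items[j]:
                    dupes.append(items[i])
        return dupes

    # After: same behaviour, clearer intent.
    def find_dupes_new(items):
        dupes = []
        for i, a in enumerate(items):
            for b in items[i + 1:]:
                if a == b:
                    dupes.append(a)
        return dupes

    assert find_dupes_old([1, 2, 1, 3, 2]) == find_dupes_new([1, 2, 1, 3, 2])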
by rsynnott on 11/14/24, 5:05 PM
Or, y'know, just not bother with any of this bullshit. "We must rewrite everything so that CoPilot will sometimes give correct answers!" I mean, is this worth the effort? Why? This seems bonkers, on the face of it.
by alberth on 11/14/24, 6:05 PM
This isn't AI's doing.
It's what happens when you add any new feature to a product that has existing tech debt.
And since AI is, for most companies, just another feature, it only makes the tech debt worse.
by teapot7 on 11/15/24, 8:32 AM
Sheesh! The Lizard People walk among us.
by tux1968 on 11/14/24, 6:06 PM
While there is no guarantee that the same trajectory is true for programming, we need to heed how emotionally attached we can be to denying the possibility.
by paulsutter on 11/14/24, 9:59 PM
Creating react pages is the new COBOL
by eesmith on 11/14/24, 4:46 PM
How does one determine if that's even possible, much less estimate the work involved to get there?
After all, 'subtle control flow, long-range dependencies, and unexpected patterns' do not always indicate tech-debt.
by mouse_ on 11/14/24, 8:24 PM
"GARBAGE IN -- GARBAGE OUT!!"
by NitpickLawyer on 11/14/24, 5:08 PM
The tooling for this will only improve.
by luckydata on 11/14/24, 5:01 PM
I'm sure nothing will change in the future either.
by dcchambers on 11/14/24, 5:12 PM
The moment you need to do something novel or complicated they choke up.
This is why I'm not very confident that tools like Vercel's v0 (https://v0.dev/) are useful for more than just playing around. It seems very impressive at first glance - but it's a mile wide and only an inch deep.