by ludovicianul on 4/29/25, 11:32 AM with 89 comments
by aristofun on 4/29/25, 12:06 PM
Looks like the higher up the management chain you go, the farther away from real engineering work: the more excitement there is, and the less common sense and real understanding of how developers and LLMs work.
> Are you 10x more efficient?
90% of my time is spent thinking and talking about the problem and solutions. 10% is spent coding (sometimes 1% coding, with 9% integrating it into existing infrastructure and processes). Even with an ideal AGI coding agent I'd be only 10% more efficient.
Imagine a very bright junior developer. You are still heavily taxed on time mentoring him and communicating.
Not many non-technical people (to my surprise) get it.
Based on posts and comments here, there are plenty of “technical enough” people who don’t understand the essence of engineering work (software engineering in particular).
Spitting out barely (yet) working, throwaway-grade code is an impressive accomplishment for TikTok, but it has very little to do with the complex, business-critical software most real engineers deal with every day.
by juancn on 4/29/25, 5:41 PM
A large chunk of the work is dealing with people, understanding what they really want/need and helping them understand it.
On the technical side, most of the work is around fixing issues with existing software (protecting an investment).
Then, maybe 1 to 10% of the workload is making something new.
AI kinda works for the "making something new" part but sucks at the rest. And when it works, it's at most "average" (in the sense of how good its training set was; it prefers things it sees more commonly, regardless of quality).
My gut instinct is that there's going to be an AI crash, much like in the late 90s/early 2000s. Too much hype, and then, after the crash, maybe we'll start to see something a bit more sane and realistic.
by variadix on 4/29/25, 2:02 PM
In many ways LLMs feel like the next iteration of search engines: they’re easier to use, you can ask follow-up questions or for examples and get an immediate response tailored to your scenario, you can provide the code and get a response for what the issue is and how to fix it, you can let it read internal documentation and get specialized support that wouldn’t be on the internet, you can let it read whole code bases and get reasonable answers to queries about said code, etc.
I don’t really see LLMs automating engineers end-to-end any time soon. They really are incapable of deductive reasoning; the extent to which they manage it is emergent from inductive phenomena, and it breaks down massively when the input is outside the training distribution (see all the examples of LLMs failing basic deductive puzzles that are very similar to a well-known one, but slightly tweaked).
Reading, understanding, and checking someone else’s code is harder than writing it correctly in the first place, and letting LLMs write entire code bases has produced immense garbage in all the examples I’ve seen. It’s not even junior-level output, it’s something like _panicked CS major who started programming a year ago_ level output.
Eventually I think AI will automate software engineering, but by the time it’s capable of doing so _all_ intellectual pursuits will be automated, because it requires human-level cognition and adaptability. Until then it’s a moderate efficiency improvement.
by jmisavage on 4/29/25, 12:33 PM
As part of this AI-first shift, all engineers now have access to Cursor, and we’re still figuring out how to integrate it. We just started defining .cursorrules files for projects.
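For what it's worth, a minimal sketch of the kind of .cursorrules file we started with (the specific rules here are illustrative, not our actual ones):

```
# .cursorrules (illustrative example)
# Project conventions the assistant should follow:
- Match the existing folder structure; do not create new top-level directories.
- Prefer small, focused functions with descriptive names; no one-letter identifiers.
- Every new function or component needs a corresponding unit test.
- Do not add new third-party dependencies without flagging them in the PR description.
```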
What’s been most noticeable is how quickly some people rely too much on AI outputs, especially the first pass. I’ve seen PRs where it’s obvious that the generated code wasn’t even run or reviewed. I know this is part of the messy adjustment period, but right now, it feels like I’m spending more time reviewing and cleaning up code than I did before.
by markus_zhang on 4/29/25, 12:49 PM
We are a team of 5, down from 8 a few months ago, and we are working on more stuff. I would not be able to survive without AI writing some queries and scripts for me. It really saves a ton of time.
by gitfan86 on 4/29/25, 1:22 PM
So instead of seeing a mass drop in job openings, you will see companies that are not bottlenecked by org issues start to move very fast. In general that will create new markets and have a positive effect on jobs.
by chrisgd on 4/29/25, 12:20 PM
https://www.theverge.com/news/657594/duolingo-ai-first-repla...
by ilaksh on 4/29/25, 12:51 PM
The leading-edge models surpass humans in some ways, but still routinely make weird oversights. I think the models will continue to get bigger and have more comprehensive world models, and the remaining brittleness will go away over the next few years.
We are early on in a process that will go from only a few jobs to almost all (existing) jobs very quickly as the models and tools continue to rapidly improve.
by zooom on 4/30/25, 5:55 PM
But the end result will be, once (if) the economy becomes healthy again, that businesses will just become more ambitious and software will get more hardware-intensive and slower. Same ol' same ol'.
by jonplackett on 4/29/25, 12:25 PM
I don't see any AI yet anywhere near good enough to literally do a person's job.
But I can easily see it making someone, say, 20-50% more effective, based on my own experience using it for coding, data processing, and lots of other things.
So now you need 8 people instead of 10 people to do a job.
That's still 2 people who won't be employed, but they haven't been 'replaced' by AI in the way people seem to think they will be.
by scarface_74 on 4/29/25, 12:49 PM
Before LLMs got good enough, there were projects I would scope with the expectation of having one junior consultant do the coding grunt work - simple Lambdas, Python utility scripts, bash scripts, infrastructure as code, translating some preexisting code to the target language of the customer.
This is the perfect use case for ChatGPT. It’s simple, well-contained work that can fit in its context window, the AWS SDKs in various languages are well documented, there is plenty of sample code, and it’s easy enough to test.
I can tell it to “verify all AWS SDK functions on the web” or give it the links to newer SDK functionality.
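To give a concrete (made-up) example of the kind of grunt work I mean, the sketch below is the sort of thing an LLM now gets right almost immediately; the bucket name and retention period are invented for illustration:

```python
# Illustrative sketch only: a Lambda that deletes S3 objects older than N days.
# BUCKET and MAX_AGE_DAYS are hypothetical values, not from a real project.
import datetime
import boto3

BUCKET = "example-report-archive"
MAX_AGE_DAYS = 30

s3 = boto3.client("s3")

def handler(event, context):
    cutoff = datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(days=MAX_AGE_DAYS)
    deleted = 0
    # Page through the bucket and remove anything past the cutoff.
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=BUCKET):
        for obj in page.get("Contents", []):
            if obj["LastModified"] < cutoff:
                s3.delete_object(Bucket=BUCKET, Key=obj["Key"])
                deleted += 1
    return {"deleted": deleted}
```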
I don’t really ever need a junior developer for anything. If I have to be explicit about the requirements anyway, I can use an LLM.
And before the gatekeeping starts: I’ve been coding as a hobby since 1986, started in assembly language back then, and have been coding professionally since 1996.
by throwaw12 on 4/29/25, 1:10 PM
I don't think it is possible NOW.
But for specific areas, the productivity gain you get from a single developer with an LLM is much higher than before. Some areas where I see it shining:
* building independent React/UI components
* boilerplate code
* reusing already solved solutions (e.g. "try algorithms X, Y, Z", "plot the chart in 2D/3D", ...); a small sketch of the plotting case follows below
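For that last point, here is a minimal sketch of what I mean by reusing a solved solution. The data is random and purely illustrative, but this is the kind of boilerplate an LLM reliably produces when asked to "plot these points in 3D":

```python
# Illustrative sketch: the kind of boilerplate 3D plot an LLM produces on request.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
x, y, z = rng.random((3, 100))  # random points, purely illustrative

fig = plt.figure()
ax = fig.add_subplot(projection="3d")  # 3D axes
ax.scatter(x, y, z, c=z, cmap="viridis")
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_zlabel("z")
plt.show()
```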
> What changed significantly in your workflow?
Hiring freeze, because leaders are not yet sure about the gains from AI: what if we hire a bunch of people and can't come up with projects for them (not because we are out of ideas, but because getting investment is hard if you are not an AI company) while the LLM is generating so much code?
> Are you 10x more efficient?
Not always, but I am filtering things out faster, which gives me the opportunity to get into the code concepts sooner (because AI summarizes it for me before I read a 10-page blog post).
by Balgair on 4/29/25, 2:21 PM
What's changed in the workflow is a lot, really. We do a lot of documentation, so most of that boilerplate is now done via AI-based workflows. In the past, that would have been one of us copy-pasting from older documents for about a month. Now it takes seconds. Most of the content is still us and the other stakeholders. But the editing passes are mostly AI too now. Still, we very much need humans in the loop.
We don't use copilot as we're doing documentation, not code. We mostly use internal AIs that the company is building and then a vendor that supports workflow-style AI. So, like, iterative passes under the token limits for writing. These workflows do get pretty long, like 100+ steps, just to get to boilerplate.
We're easily 100x more efficient. Four of us can get a document done in a week that took the whole team years to do before.
The effort is more concentrated now. I can shepherd a document to near-final review with a meeting or two with the specialist engineers; that used to take many meetings with much of both teams. We were actually able to keep up and not fall behind for about 3 months. But management sees us as a big, pointless cost center of silly legal compliance, so we're permanently doomed to never get caught up. Whatever, I still have a job for now.
I guess my questions back are:
- How do you think AI is going to change the other parts of your company than coding/engineering?
- Have you seen other non engineering roles be changed due to AI?
- What do your SOs/family think of AI in their lives and work?
- How fast do you think we're getting to the 'scary' phase of AI? 2 years? 20 years? 200 years?
[0] I try to keep this account as anonymous as possible, so no, I'm not sharing the company.
by nusl on 4/29/25, 12:18 PM
I'm personally only more productive with the help of AI if one of the following conditions is met:
1. It's something I was going to type anyway, but I can just press Tab and/or make a minor edit
2. The code produced doesn't require many changes or much time to understand, because the times when it has required many changes or deeper understanding would probably have been faster to just code myself
Where it has been helpful, though, is debugging errors or replacing search engines for help with docs or syntax. But sometimes it produces bullsh*t that doesn't exist, and this can lead you down a rabbit hole to nowhere.
More than once it's suggested something to me that solved all of the things I needed, only to realise none of it existed.
by biggestdoofus on 4/29/25, 1:26 PM
The people writing boring CRUD apps should be scared (but I think it's a failure of our industry that this is still a thing).
The technical debt that will be amassed by AI coding is worrying, however. Coworkers here routinely try to merge in stuff that is just absolute slop, and now I even have to argue with them because they think it's right simply because the AI wrote it...
by ojr on 4/29/25, 1:09 PM
In a sprint-planning scenario, I think tasks that were 1, 2, 3, 5, 8, 13, etc. get knocked down a notch, nothing more, with the arrival of AI. AI has not turned an 8-point task into a 3-point one at all. There is a 50/50 chance that an old 8-point task remains 8 points, with it sometimes dropping to 5.
by vvojd on 5/1/25, 12:24 AM
On the former, you really have to consider a number of options when refactoring or adding to a codebase. On the latter, you may be able to get away with having an extremely detailed manual, but ultimately a lot of day-to-day things aren't suitable for RAG.
So no, no one’s getting fired anytime soon.
by fhd2 on 4/29/25, 1:02 PM
But I think the central question is not how much of software development can be automated. It's rather how many engineers companies _believe_ they need.
Having spent some time in mid-sized companies adjacent to large companies, the sheer size of teams working on relatively simple stuff can be stunning. I think companies with a lot of money have overstaffed on engineers for at least a decade now. And the thing is: it kinda works. An individual or a small team can only go so far, and a good team can only grow at a certain rate. If you throw hundreds of engineers at something, they _will_ figure it out, even if you could theoretically do it with far fewer by optimising for quality hires and effective ways of working. That's difficult and takes time, so if you have the money for it, you can throw more bodies at it instead. You won't get it done cheaper, probably also not better, but most likely faster.
The mere _idea_ that LLMs can replace human engineers kinda resets this. The base expectation is now that you can do stuff with a fraction of the work force. And the thing is: You can, you always could, before LLMs. I've been preaching this for probably 20 years now. It's just that few companies dared to attempt it, investors would scoff at it, think you're being too timid. Now they celebrate it.
So like many, I think any claims of replacing developers with AI are likely cost savings in disguise, presented in a way the stock market might accept more than "it's not going so well, we're reducing investments".
All that aside, I also find it difficult as a layperson to separate the advent of coding LLMs from other, probably more consequential effects, like economic uncertainty. When the economy is stable, companies invest. When it's unstable, they wait.
by orwin on 4/29/25, 12:59 PM
As the complexity grows, the usefulness of AI agents decreases: a lot, and quite fast.
In particular, integration of microservices is a really hard case to crack for any AI agent, as it often mixes training data with context data.
It is more useful in centralised apps, and especially for front-end dev, as long as you don't use finite state machines. I don't understand why, but even Claude/Cursor trips on otherwise really easy code (and btw, if you don't use state machines for your complex front-end code, you're doing it wrong).
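To be clear about what I mean by a state machine, here is a minimal sketch (in Python for brevity, with made-up states; the same explicit-transition-table idea applies to front-end code):

```python
# Minimal finite state machine sketch: an upload widget's states.
# States and events are illustrative, not from a real codebase.
TRANSITIONS = {
    ("idle", "submit"): "uploading",
    ("uploading", "success"): "done",
    ("uploading", "failure"): "error",
    ("error", "retry"): "uploading",
}

def next_state(state: str, event: str) -> str:
    """Return the next state, or raise if the event is invalid in this state."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"event {event!r} not allowed in state {state!r}")

# Example walk-through: idle -> uploading -> error -> uploading -> done
state = "idle"
for event in ["submit", "failure", "retry", "success"]:
    state = next_state(state, event)
    print(state)
```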
As long as you know what your agent is shitty at, however, using AI is a net benefit, as you don't lose time trying to communicate your needs and just do those parts yourself, so it is only gains and no losses.
by kypro on 4/29/25, 12:29 PM