by gfysfm on 2/15/25, 7:44 PM with 79 comments
by PessimalDecimal on 2/15/25, 10:26 PM
by serial_dev on 2/15/25, 10:09 PM
Just as you don't need to outrun the bear, only your friend, it's okay to be theoretically replaceable by AI, as long as I'm not the most obvious person to be replaced (at least that's what I tell myself).
Companies move slowly; I just hope they move slowly enough for me to provide a good life for my family as a software developer.
This is a motivating and the least depressing outlook on the future, as it encourages me to learn things better, and that feels good.
I have many thoughts on AI (sometimes excited, sometimes frustrated, sometimes worried), but I hadn't seen this idea phrased like this before, so I thought I'd share it.
by satisfice on 2/15/25, 10:26 PM
It will probably not ask you any questions, but if it does and you answer it will not ask follow up questions, or if it does it will lose track of your answers or non-answers. It does not maintain situational awareness. It does not speculate on your state of mind or competence as you help it.
by streptomycin on 2/15/25, 10:30 PM
by cadamsdotcom on 2/15/25, 9:59 PM
You can avoid your job being eaten by AI by moving toward roles where you talk to people, understand their problems, and perform the work of translating that into solutions.
There will also always be an orchestration role no matter how much automation is thrown at a problem.
* Who debugs that AI code?
* OK, say it's an AI. Now who fixes the auto-debugger when it breaks?
* Who makes decisions about rebuilds and migrations and big platform shifts?
* Sure, maybe that decision is informed by advice from AIs, but someone with accountability has to make the call before the wheels are put in motion for the rebuild or migration or whatever.
Both are durable targets for your career.
by 0x20cowboy on 2/15/25, 10:40 PM
VCs would probably like that as well.
Maybe pitch that to CEOs as a cost-saving measure.
by btrettel on 2/15/25, 10:24 PM
I'm a theory and simulation guy, but in retrospect I should have done far more experiments when I was in training. I guess it's never too late to start...
by irrational on 2/15/25, 10:21 PM
by kubb on 2/15/25, 10:24 PM
by kabes on 2/15/25, 10:52 PM
by arijo on 2/15/25, 10:40 PM
* Problems are ill-defined and poorly-scoped
* Solutions are difficult to verify
* The total volume of code involved is massive
In my view, this is describing legacy code: feature work in large established codebases.
If you have used cursor.ai to try to create a moderately sized project you'll see this happen even with newly generated code.
In my experience, if you limit yourself to poorly-thought-through prompts and don't work on getting a deep understanding of the generated codebase, the LLM will start duplicating the same code flows in different ways, many times forgetting some of the behaviour already implemented.
It's kind of like having dozens of developers working on the same codebase, each clueless about what the others have done, re-implementing the same functionality until the code turns into a pile of spaghetti.
It can be done but:
* You must have a deep understanding of the code
* You need to think hard about what you are doing and give very detailed instructions to the AI
It works for trying a quick prototype but when moving on to production grade code you need to slow down and "program" step by step providing precise instructions as you go.
You'll have to design the changes to the minor detail and then you can let the AI do the grunt work.
It's like programming without coding.
by __MatrixMan__ on 2/15/25, 10:47 PM
Most of us are already accountable for outcomes, not outputs. Perhaps we could go further in that direction, but doing so only makes personal sense if the underlying work you're doing is important to you. If AI is about to make us all 10x coders, why should we keep the jobs we have when we could take that extra capability and go do something more meaningful, the kind of thing that used to require a 10-person company?
I'm personally pretty happy with my company, but my point is that once everybody gets more productive, what's the likelihood that everybody who still has a job after the transition still wants that job, now that doors which were previously closed are open?
It's gonna be a bigger reshuffle than just taking more ownership over our existing domains.
by bitwize on 2/15/25, 10:01 PM
by globalise83 on 2/15/25, 10:30 PM
This truly is the challenge - both to have the huge context window and the ability to conduct coherent and comprehensive reasoning using the entire context. We should see soon whether there is a Moore's law effect here: I would be immensely surprised if not.
by trod1234 on 2/15/25, 10:06 PM
Upon reading this, it seems like the author is in the latter group, and while he offers a few points about what computers can and can't do, the advice given is horrible advice because it takes things in isolation and overgeneralizes, while not paying attention to underlying factors.
The "lets just tough it out" approach and specialize in old code, or learning to do what AI can't are impossible tasks in practice.
If the author is in the latter group, I think he's unintentionally doing himself a disservice by showing a low level of competency in addressing the problems.
You don't want to hire an engineer who is blind to the potential liabilities they create.
Any engineers in IT are intimately familiar with the fallout from failures involving sequential steps in a pipeline.
There's front-of-line blocking (FOLB), and there are single points of failure (SPOFs); these are considered in resiliency design and in documentation of failure domains. The most important part of that work is identifying liabilities upfront, before they happen.
Entry-level task positions are easily automated by AI, so companies replace those workers with AI.
How do you get to be a mid-level engineer when the entry level no longer exists? It's all based on years of experience, experience which can no longer be gotten.
Does this sound like a pipeline yet?
You still have mid-level engineers available, as you do senior engineers, but no new ones are entering the marketplace. Aging removes these people over time, and as that pool sieves toward zero, the cost of hiring these people goes up without bound (until no one can be hired at all).
What comes out of such a pipeline is typically the same as, and most often less than, what goes in. In talent development, it's a sieve separating the wheat from the chaff.
Only the entry point is clogged, and nothing new is going in. Humans act on future expectations, and the volume entering such pipelines adapts accordingly: no future, and no one goes into such professions.
After a certain point, you can't find talent. There's no economic incentive because companies made it this way by collusion.
Things stop getting done, which forces the collapse of the company. It's not just one company, because this is a broad problem; it happens across the board, creating an inflationary cycle in cost followed by a correlated deflationary cycle in talent, which cannot be fixed except by the industry as a whole removing the blockage. They can't do that, though, because of short-term competition.
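The pipeline dynamic described above can be sketched as a toy model (all cohort sizes, attrition, and promotion rates here are hypothetical, purely to illustrate the claim): when entry-level intake goes to zero, promotions into the mid and senior pools dry up, and attrition eventually shrinks the senior pool relative to a world where hiring continued.

```python
# Toy model of a talent pipeline (entry -> mid -> senior).
# Each year a fraction of each cohort is promoted up, and a
# fraction leaves the field entirely (attrition).

def simulate(years, entry_intake, attrition=0.08, promotion=0.15):
    entry, mid, senior = 1000.0, 800.0, 600.0  # hypothetical starting cohorts
    for _ in range(years):
        promoted_to_mid = entry * promotion
        promoted_to_senior = mid * promotion
        entry = entry * (1 - attrition) - promoted_to_mid + entry_intake
        mid = mid * (1 - attrition) - promoted_to_senior + promoted_to_mid
        senior = senior * (1 - attrition) + promoted_to_senior
    return entry, mid, senior

# With steady entry-level hiring, promotions keep replenishing the senior pool;
# with the entry point "clogged" (intake = 0), the whole pipeline thins out.
_, _, senior_with_hiring = simulate(15, entry_intake=200)
_, _, senior_no_hiring = simulate(15, entry_intake=0)
print(senior_with_hiring > senior_no_hiring)  # True
```

The exact numbers are meaningless; the point is the structural one the comment makes: cutting off the entry stage doesn't hurt immediately, but compounds over the years it takes a cohort to reach seniority.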
When have industry business people ever turned on a dime in economically challenging situations where the money wasn't available? Never.
Debt financing makes it so these people don't need to examine these trends more than a year out, but the consequences of these trends can occur just outside that horizon, and once integrated the bridges have been burnt and there is no going back while also maintaining marketshare.
All of the incentives force business people to drive everything into the ground in these types of cycles. The only solution is to know ahead of time and not bait the hook. The business people of today have shown that this is beyond them; it's all about short-term profits at the limits of growth, business as usual.
For the real-world consequences of this, you can look to Thomas Malthus, and to Catton, who revisits Malthus.
Catton importantly shows how extraction of non-renewables can reduce or destroy previously existing renewable flows, leading to lower population limits overall than before overshoot.
Similar behavior applies broadly to destructive phase changes of supercritical systems with complex feedback mechanisms (i.e., a negative feedback flips positive and runs away, or vice versa, leading to collapse/halt). In other words, systems where you have two narrow boundaries outside of which they fail.
by firesteelrain on 2/15/25, 10:27 PM