by joe8756438 on 2/4/25, 4:23 PM with 30 comments
I'm curious about how the barrier to entry for creating software products has changed since the rapid proliferation of AI development tools.
by sevensor on 2/4/25, 6:08 PM
AI dev tools are that catamaran built out of cardboard and duct tape: they’ll get you across the pool; you might even get half a mile from shore, but there you are, in the middle of the lake, sitting on cardboard and duct tape, wishing you knew how to swim.
by boshalfoshal on 2/4/25, 5:53 PM
Those who already have a high-level idea of what to do and roughly how to execute it benefit the most from LLMs at the moment. This is very good for purely "technical" devs in greenfield environments. Less useful for super large interconnected codebases, but tools are getting there.
It will not, however, magically make a bad dev a good one. A bad software product is not usually bottlenecked by the software it's running on; it's bottlenecked by user experience and product-market fit. That still requires some skilled human input, but that could also change soon. Some people have better product intuition than others but couldn't execute on complex code, so LLMs do help here to an extent.
As of 2025, I think you still need to be a pretty decent dev even with LLM assistance.
by joshstrange on 2/4/25, 6:58 PM
They are, in every case I've seen, creating software _demos_. Those things will collapse under their own weight within 1-2 more iterations.
Someone with no code experience can say "Make snake!" or other contrived examples, and maybe even add a handful of features, but very quickly they will code themselves into a corner they can't get out of. Heck, I sometimes go 3-4 prompts deep on something with Aider, then git reset back once it turns out something isn't going to work out.
If someone has _fully launched_ a product using only AI to write _all_ the code (Press X to doubt), then it's either a product that will never grow past its initial feature set and/or something trivially copied (with or without AI).
What AI tools may change is the ability for "ideas people" to create a basic MVP (of the tool itself; I don't think you are going to get an LLM to churn out a whole SaaS codebase without a developer guiding it) and raise interest/funding/recruit others. That's not the "barrier to entry" lowering, that's just a "better slide deck".
by Bjorkbat on 2/4/25, 8:18 PM
For software meant to be consumed by the masses, it's too unreliable for all the boring details, but if you want something that serves a specific purpose, then sure, it seems to work really well.
Otherwise, though, I haven't really heard of any non-technical founders leveraging it to finally get their app off the ground.
by bloomingkales on 2/5/25, 4:52 AM
It’s the most fun I’ve had in a long time, designing ad hoc algorithms that mimic how we store and retrieve memory (and form context).
It’s gotten to the point where simulation theory is palatable to me. How the simulation pulls in relevant context just in time is critical to making a believable experience. You can cut a lot of corners so long as the felt experience holds up (e.g., the LLM can simulate a scenario with you without having all the data a fully fleshed-out version would need; this is a memory/context constraint that LLMs are newly introducing us to).
Ok, off the deep end, I know. But listen, it would be programmers that would catch wind of it first.
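A minimal sketch of the "just in time" context retrieval the comment above describes, assuming a toy bag-of-words similarity in place of a real embedding model (the memory store and function names here are illustrative, not from any actual system):

    import math
    from collections import Counter

    def embed(text):
        # Toy bag-of-words "embedding"; a real system would use a learned model.
        return Counter(text.lower().split())

    def cosine(a, b):
        dot = sum(a[w] * b[w] for w in a)
        norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    # Hypothetical memory store of past interactions.
    memories = [
        "user prefers dark mode",
        "user is building a snake game in python",
        "user asked about pandas dataframes yesterday",
    ]

    def retrieve_context(query, k=2):
        # Pull in only the k most relevant memories just in time,
        # instead of stuffing every memory into the prompt.
        ranked = sorted(memories, key=lambda m: cosine(embed(query), embed(m)), reverse=True)
        return ranked[:k]

    print(retrieve_context("help me debug my python snake game"))

The corner-cutting is exactly that ranked[:k] slice: the experience only needs whatever context is relevant right now, not the full history.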
by conception on 2/5/25, 12:33 AM
But the common folk will be able to create pretty compelling software as easily as they can create pretty compelling photos today, at a far lower skill barrier than shooting on film in the '70s.
So, as with photography, the jobs will be fewer and will demand more work and a higher bar of skill, but the common person will be able to produce something pretty compelling while knowing little to nothing about how it all works.
by n0rdy on 2/4/25, 10:10 PM
I did a short experiment by trying to build an app with Cursor in a stack and domain I know nothing about. I got it to the first stage, and it was cool. But the app kept crashing once in a while with memory issues, and my AI friend kept coming up with solutions that didn't help but made the code more and more overengineered and harder to navigate. I'd feel sorry for those who'd need to maintain tools like this at stages like the one I described. Maybe that's the state of future start-ups out there?
by dvngnt_ on 2/4/25, 5:36 PM
I've had many friends with "app ideas", and these tools can help them flesh out their value proposition.
by ingigauti on 2/4/25, 9:08 PM
I think we'll have a new programming language (natural language with rules); I'm biased though, as I've made that language :)
The barrier is going a lot lower.
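Purely as an illustration of the idea (this is not the commenter's actual language; the rules and commands here are made up), a "natural language with rules" program can be thought of as constrained English statements matched against patterns that trigger actions:

    import re

    # Each rule pairs a constrained natural-language pattern with an action.
    rules = [
        (re.compile(r"read file (?P<path>\S+)", re.I),
         lambda m: print(f"reading {m['path']}")),
        (re.compile(r"send (?P<msg>.+) to (?P<user>\S+)", re.I),
         lambda m: print(f"sending {m['msg']!r} to {m['user']}")),
    ]

    def run(statement):
        # The "program" is plain English, but only sentences matching a rule execute.
        for pattern, action in rules:
            match = pattern.fullmatch(statement.strip())
            if match:
                return action(match)
        raise ValueError(f"no rule matches: {statement!r}")

    run("read file notes.txt")      # reading notes.txt
    run("send hello to alice")      # sending 'hello' to alice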
by duxup on 2/4/25, 4:31 PM
It's a dynamic I can't quite put a name on right now, but I think that's a barrier.
by Havoc on 2/5/25, 12:48 PM
It’s very viable to glue things together if you have a vague concept of how they work. If you can’t read code at all, you’ll be stuck the second you encounter a bug the AI can’t solve.
Actually just built something with pandas this morning without having much knowledge of it. Just lots of print() iterations and AI until the dataframes do what I want.
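For what it's worth, the print()-and-iterate pandas workflow described above looks roughly like this (the data here is made up; the point is checking each dataframe before building on it):

    import pandas as pd

    df = pd.DataFrame({
        "region": ["north", "south", "north", "east"],
        "sales": [120, 90, 150, 80],
    })

    # Iterate: print, eyeball the output, adjust, print again.
    print(df.head())
    print(df.dtypes)

    totals = df.groupby("region", as_index=False)["sales"].sum()
    print(totals)  # confirm the shape and content before the next step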
by 999900000999 on 2/4/25, 5:47 PM
ChatGPT and friends are really good at grunt work, but as far as picking the tech stack and architecting an actual solution, it falls flat.
Plus, if you ever run into any real trouble, ChatGPT has a very nasty habit of just telling you to keep doing it the same way. I've had times where I'll post the same code into multiple LLMs and get multiple incorrect answers, all while I'm thinking: if I were an actual web developer, this would take 30 seconds...
by andrei_says_ on 2/4/25, 8:16 PM
It saves time searching documentation but sometimes hallucinates.
The key here is that I can tell the difference: I have spent time in many codebases and read up on code design theory.
So in my case it’s a multiplier of clear understanding and somewhat sufficient subject matter expertise.
For someone without expertise, the LLM quickly becomes a multiplier of the Dunning-Kruger Effect.
I know enough to not try and write an organic chemistry paper with an LLM. But Twitter tells everyone they can do a similar thing in the area of software engineering.