by nemofoo on 1/27/25, 9:37 PM with 202 comments
by potsandpans on 1/27/25, 10:05 PM
I think the key to being successful here is to realize that you're still at the wheel as an engineer. The LLM is there to rapidly synthesize the universe of information.
You still need to 1) have solid fundamentals in order to have an intuition against that synthesis, and 2) be experienced enough to translate that synthesis into actionable outcomes.
If you're lacking in either, you're subject to the same whims of copypasta that have always existed.
by KronisLV on 1/27/25, 10:16 PM
The very moment you try to go off the beaten path and do something unconventional, or stuff that most people won't have written a lot about, it gets more tricky. Just consider how many people will know how to configure some middleware in a Node.js project... vs most things related to hardware or low-level work. Or even working with complex legacy codebases that have bits of code with obscure ways of interacting, and more levels of abstraction than can reasonably be put in context.
Then again, if an LLM gets confused, then a person might as well. So, personally I try to write code that'd be understandable by juniors and LLMs alike.
by pieix on 1/27/25, 10:00 PM
This is a good take that tracks with my (heavy) usage of LLMs for coding. Leveraging productive-but-often-misguided junior devs is a skill every dev should actively cultivate!
by tashian on 1/28/25, 1:08 AM
Six months ago, I tried building this app with ChatGPT and got nowhere fast.
Building it with Claude required gluing together a few things that I didn't know much about: JavaScript audio processing, drawing on a JavaScript canvas, and an algorithm for bilinear interpolation.
I don't write JavaScript often. But I know how to program and I understand what I'm looking at. The project came together easily and the creative momentum of it felt great to me. The most amazing moment was when I reported a bug—I told Claude that the audio was stuttering whenever I moved the controls—and it figured out that we needed to use an AudioWorklet thread instead of trying to play the audio directly from the React component. I had never even heard of AudioWorklet. Claude refactored my code to use the AudioWorklet, and the stutter disappeared.
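For anyone curious what that fix looks like, here is a minimal sketch of the AudioWorklet pattern. The names (like 'synth-processor') are placeholders, not anything from tashian's project: samples get rendered on a dedicated audio thread, and the React side only posts parameter messages.

    // synth-processor.ts: runs on the audio-rendering thread, so React re-renders
    // can no longer starve playback. These globals belong to the
    // AudioWorkletGlobalScope; declared so the file compiles standalone.
    declare const sampleRate: number;
    declare class AudioWorkletProcessor { readonly port: MessagePort; }
    declare function registerProcessor(name: string, ctor: unknown): void;

    class SynthProcessor extends AudioWorkletProcessor {
      private freq = 220;
      private phase = 0;

      constructor() {
        super();
        // Control changes arrive as cheap messages, not audio-thread work.
        this.port.onmessage = (e: MessageEvent) => { this.freq = e.data.frequency; };
      }

      process(_inputs: Float32Array[][], outputs: Float32Array[][]): boolean {
        const out = outputs[0][0]; // first output, first channel
        for (let i = 0; i < out.length; i++) {
          out[i] = 0.2 * Math.sin(this.phase);
          this.phase += (2 * Math.PI * this.freq) / sampleRate;
        }
        return true; // keep the processor alive
      }
    }
    registerProcessor('synth-processor', SynthProcessor);

    // In the component (UI thread): load the worklet once, connect it, and
    // send parameter changes over the port while audio keeps rendering smoothly.
    const ctx = new AudioContext();
    await ctx.audioWorklet.addModule('synth-processor.js');
    const node = new AudioWorkletNode(ctx, 'synth-processor');
    node.connect(ctx.destination);
    node.port.postMessage({ frequency: 440 });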
I wouldn't have built this without Claude, because I didn't need it to exist that badly. Claude reduced the creative inertia just enough for me to get it done.
by zitterbewegung on 1/27/25, 9:59 PM
I do see the LLMs ingesting more and more documentation and content, and they are improving at giving me the right answers. Almost two years ago I don't believe they had every Python package indexed; now they appear to have at least the documentation or source code of each.
by powerset on 1/27/25, 10:11 PM
by ravroid on 1/27/25, 10:49 PM
I use an NPM script to automate concatenating the spec + source files + prompt, which I then copy/paste into o1. So far this has been working somewhat reliably for the early stages of a project, but it has diminishing returns.
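Something in the spirit of that script, as a sketch (the file names and layout are my guesses, not the commenter's actual setup):

    // concat-context.ts: glue spec + source + prompt into one pasteable blob.
    import { readFileSync, writeFileSync, readdirSync } from 'node:fs';
    import { join } from 'node:path';

    const parts: string[] = ['--- SPEC ---', readFileSync('SPEC.md', 'utf8')];

    // Append each source file under a labelled divider so the model can cite paths.
    for (const file of readdirSync('src').sort()) {
      if (!file.endsWith('.ts')) continue;
      parts.push(`--- src/${file} ---`, readFileSync(join('src', file), 'utf8'));
    }

    parts.push('--- TASK ---', readFileSync('prompt.md', 'utf8'));
    writeFileSync('context.txt', parts.join('\n\n'));

Wired into package.json as something like "context": "tsx concat-context.ts" (assuming a TS runner such as tsx), it's one npm run away.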
by tanseydavid on 1/27/25, 9:55 PM
by transcriptase on 1/27/25, 10:46 PM
I can also tell when it’s stuck in some kind of context swamp and won’t be any more help, because it will just keep making the same stupid mistakes over and over and generally forgetting past instructions.
At that point I take the last working code and paste it into a new chat.
by superq on 1/28/25, 3:02 AM
I have a custom prompt that instructs gpt4o to get aggressive about attacking anything I say (and, importantly, anything it says).
Here's my result for the same question:
https://chatgpt.com/share/67984aa9-1608-8012-be93-a77728ab8e...
by thot_experiment on 1/27/25, 10:48 PM
I've been doing that since way before LLMs were a thing.
by medhir on 1/28/25, 2:03 AM
Like many others are saying, you need to be in the driver's seat and in control. The LLM is not going to fully complete your objectives for you, but it will speed you up when provided with enough context, especially on mundane boilerplate tasks.
I think the key to LLMs being useful is knowing how to prompt with enough context to get a useful output, and knowing what context is not important so the output doesn’t lead you in the wrong direction.
by nsavage on 1/27/25, 10:07 PM
by qiqitori on 1/28/25, 1:46 AM
USB-CDC is cooler than that, you can make the Pico identify as more than just one device. E.g. https://github.com/Noltari/pico-uart-bridge identifies as two devices (so you get /dev/ttyACM0 and /dev/ttyACM1). So you could have logs on one and image transfers on another. I don't think you're limited to just two, but I haven't looked into it too far.
You can of course also use other USB protocols. For example you could have the Pico present itself as a mass-storage device or a USB camera, etc. You're just limited by the relatively slow speed of USB1.1. (Though the Pico doesn't exactly have a lot of memory so even USB1.1 will saturate all your RAM in less than 1 second)
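On the host side those interfaces just show up as separate serial devices, so consuming them is straightforward. A sketch in Node (the serialport npm package and the Linux device paths are my assumptions, not from the comment):

    // Two CDC interfaces from one Pico: text logs on one, binary frames on the other.
    import { SerialPort } from 'serialport';

    const logs = new SerialPort({ path: '/dev/ttyACM0', baudRate: 115200 });
    const bulk = new SerialPort({ path: '/dev/ttyACM1', baudRate: 115200 });

    // Human-readable log stream, never interleaved with image traffic.
    logs.on('data', (chunk: Buffer) => process.stdout.write(chunk.toString()));

    // Accumulate binary chunks (e.g. an image transfer) separately.
    const frames: Buffer[] = [];
    bulk.on('data', (chunk: Buffer) => frames.push(chunk));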
by Fourier864 on 1/28/25, 12:03 AM
by cudgy on 1/28/25, 3:37 AM
Made me wanna join you in your garage and help out with the project :)
by turnsout on 1/28/25, 12:19 AM
Cursor & Claude got the boilerplate set up, which was half the mental barrier. Then they acted as thought partners as I tried out various implementations. In the end, I came up with the algorithm to make the thing performant, and now I'm hand-coding all the shader code—but they helped me think through what needed to be done.
My take is: LLMs are best at helping you code at the edge of your capabilities, where you still have enough knowledge to know when they're going wrong. But they'll help you push that edge forward.
by cmdtab on 1/27/25, 10:20 PM
I asked Claude to write the initial version. It came up with a complicated class-based solution. I spent more than 30 minutes trying to get a good abstraction out of it. I was copy-pasting TypeScript errors and applying the fixes it suggested without thinking much.
In the end, I gave up and wrote what I wanted myself in 5 minutes.
0] https://github.com/cloudycotton/browser-operator/blob/main/s...
by righthand on 1/28/25, 4:21 AM
by lbotos on 1/28/25, 12:23 AM
If .com, email me (it's in my profile) and I can see if there is a reason your account is getting so heavily captcha'd.
by insane_dreamer on 1/27/25, 11:28 PM
A bit on a tangent, but has there been any discussion of how junior devs in the future are ever going to get past that stage and become senior dev calibre if companies can replace the junior devs with AIs? Or is the thinking we'll be fine until all the current senior devs die off and by then AI will be able to replace them too so we won't need anyone?
1. CS/Eng degree
2. ???
3. Senior dev!
by tippytippytango on 1/28/25, 12:02 AM
by dstainer on 1/27/25, 11:31 PM
by theodric on 1/28/25, 12:09 AM
Most demeaning and depressingly toxic thing I've read today...
by Havoc on 1/28/25, 1:14 AM
I've come to a similar conclusion - for now at least it's best applied at a fairly granular level. "Make me a red brick wall there" rather than "hey architect, make me a house".
I do think OP tried a bit too much new stuff in one go though. USB plus Zig is quite a bit more ambitious than the traditional hello world in a new language.
by 65 on 1/28/25, 12:15 AM
by BigParm on 1/27/25, 9:55 PM
by sarob on 1/28/25, 4:43 PM
by anigbrowl on 1/27/25, 9:59 PM
by lxe on 1/27/25, 11:26 PM
by Graziano_M on 1/28/25, 2:06 AM
by rekabis on 1/28/25, 1:19 AM
I'm gonna have to work this into a convo some day. Just to see the “wait, what??” expressions on people's faces.
by petarb on 1/28/25, 5:45 AM
I love it!
by protocolture on 1/28/25, 12:01 AM
by addaon on 1/27/25, 9:59 PM
by adsharma on 1/28/25, 1:00 AM
* Get AI to write tests (a sketch of that loop follows below)
* Use copy/paste. No IDE
* Use Python (not because it's better than Zig)
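A sketch of that tests-first loop (in TypeScript here only to match the other snippets in this thread; 'slugify' is a hypothetical module under test): the human owns the spec in the form of tests, and the AI iterates on the implementation until they pass.

    // slugify.test.ts: you write, or at least vet, the tests; the AI gets to
    // iterate on slugify.ts until `node --test` goes green.
    import { test } from 'node:test';
    import assert from 'node:assert/strict';
    import { slugify } from './slugify.js'; // hypothetical module under test

    test('lowercases and hyphenates', () => {
      assert.equal(slugify('Hello World'), 'hello-world');
    });

    test('drops non-alphanumeric characters', () => {
      assert.equal(slugify("It's 2025!"), 'its-2025');
    });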
by j45 on 1/28/25, 4:59 AM
Learning how to build or create with a new kind of word processor is a skill unto itself.
by mordymoop on 1/27/25, 10:51 PM
We get it. They’re not superintelligent at everything yet. They couldn’t infer what you must’ve really meant in your heart from your initial unskillful prompt. They couldn’t foresee every possible bug and edge case from the first moment of conceptualizing the design, a flaw which I’m sure you don’t have.
The thing that pushes me over the line into ranting territory is that computer programmers, of all people, should know that computers do what you tell them to.
by anaisbetts on 1/27/25, 10:20 PM
by ffitch on 1/28/25, 2:08 AM
by djray on 1/28/25, 3:44 PM
"I learned that I need to stay firmly in the driver’s seat when tackling new tech."
Er, that's pretty much what a pilot is supposed to do! You can't (as yet) just give an AI free rein over your codebase and expect to come back later that day to discover a fully finished implementation. Maybe unless your prompt was "Make a snake game in Python". A pilot would be supervising their co-pilot at all times.
Comparing AIs to junior devs is getting tiresome. AIs like Claude and newer versions of ChatGPT have incredible knowledge bases. Yes, they do slip up, especially with esoteric matters where there are few authoritative (or several conflicting) sources, but the breadth of knowledge in and of itself is very valuable. As an anecdote, neither Claude nor ChatGPT were able to accurately answer a question I had about file operation flags yesterday, but when I said to ChatGPT that its answer wasn't correct, it apologised and said the Raymond Chen article it had sourced wasn't super clear about the particular combination I'd asked about. That's like having your own research assistant, not a headstrong overconfident junior dev. Yes, they make mistakes, but at least now they'll admit to them. This is a long way from a year or two ago.
In conclusion: don't use an AI as one of your primary sources of information for technology you're new to, especially if you're not double-checking its answers like a good pilot.
by DigitalSea on 1/27/25, 11:51 PM
LLMs are jittery apprentices. They'll hallucinate measurements, over-sand perfectly good code, or spin you in circles for hours. I've been there, back in the GPT-4 days especially; nothing stings like realising you wasted a day debugging the AI's creative solution to a problem you could've solved in 20 minutes.
When you treat AI like a toolbelt, not a replacement for your own brain? Magic. It's killer at grunt work: explaining regex, scaffolding boilerplate, or untangling JWT auth spaghetti. You still gotta hold the blueprint. AI ain't some magic wand: it's a nail gun. Point it wrong, and you'll spend four days prying out mistakes.
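That JWT grunt work is exactly the shape of thing worth delegating. A sketch of the kind of Express middleware an LLM will happily scaffold (the error shapes and secret handling here are illustrative, not a hardened implementation):

    // requireAuth: verify a Bearer JWT before letting the request through.
    import type { NextFunction, Request, Response } from 'express';
    import jwt from 'jsonwebtoken';

    export function requireAuth(req: Request, res: Response, next: NextFunction) {
      const header = req.headers.authorization ?? '';
      const token = header.startsWith('Bearer ') ? header.slice(7) : null;
      if (!token) return res.status(401).json({ error: 'missing token' });

      try {
        // jwt.verify throws on a bad signature or an expired token.
        (req as any).user = jwt.verify(token, process.env.JWT_SECRET as string);
        next();
      } catch {
        res.status(401).json({ error: 'invalid token' });
      }
    }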
Sucks it cost you time, but hey, now you know never to let the tool work you. Hopefully it's a lesson OP learns once and doesn't let sour their experience with AI, because when utilised properly you can really get things done, even if it's just the tedious/boring stuff or things you'd otherwise spend time bashing away at Google, reading docs, or hunting down on StackOverflow.
by rglover on 1/27/25, 11:09 PM
For anything remotely complex, this is dead on. I use various models daily to help with coding, and more often than not, I have to just DIY it or start brand new chats (because the original context got overwhelmed and started hallucinating).
This is why it's incredibly frustrating to see VCs and AI founders straight-up gaslighting people about what this stuff can (or will) do. They're trying to push this as a "work killer," but really, it's going to be some version of the opposite: a mess creator that necessitates human intervention.
Where we're at is amazing, but we've got a loooong way to go before we can be on hovercrafts sipping sodas Wall-E style.
by groby_b on 1/27/25, 11:59 PM
No design. Hardware & software. 2 different platforms. A new language. Zig. Unrealistic time expectations.
A senior SWE would've still tanked this, just in different ways.
Personally, I'd still consider it a valuable experiment, because the lessons learned are really valuable ones. Enjoy round 2 :)
by stuaxo on 1/27/25, 10:00 PM
by tacoooooooo on 1/27/25, 10:05 PM
Experienced folks aren't surprised by this. LLMs are fast for boilerplate, research, and exploring ideas, but they're not autonomous coders. The key is you staying in charge: detailed prompts, critical code review, iterative refinement. Going back to web interfaces and manual pasting because editor integration felt "too easy" is a massive overcorrection. It's like ditching cars for walking after one fender bender.
Ultimately, this wasn't an AI failure, it was an inexperienced user expecting too much, too fast. The "lessons learned" are valid, but not AI-specific. For those who use LLMs effectively, they're force multipliers, not replacements. Don't blame the tool for user error. Learn to drive it properly.
by neilv on 1/28/25, 12:10 AM
A junior dev faking competence while plagiarizing like crazy.
The plagiarizing part is why the junior dev from hell might not get fired: laundering open source copyrights can have beancounter alignment.