by ghuntley on 6/4/25, 2:40 PM with 90 comments
by makmanalp on 6/4/25, 3:16 PM
For example, managing the context window has become less of a problem: newer models have larger context windows, and tools like the auto-resummarization / context-window refresh in Claude Code mean you might be just fine without doing anything yourself.
All this to say that the idea that you're left significantly behind if you aren't training yourself on this feels bogus (I say this as a person who /does/ use these tools daily). It should take any programmer no more than a few hours to learn these skills from scratch with the help of a doc, meaning any employee you hire should be able to pick them up no problem. I'm not sure it makes sense as a hiring filter. Perhaps in the future this will change. But right now these tools are built more like user-friendly appliances: more like a cellphone or a toaster than a technology to wrap your head around, like a compiler or a database.
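The auto-resummarization idea mentioned above can be sketched roughly like this. This is a hypothetical illustration, not how Claude Code actually implements it; `summarize` is a stand-in for a real LLM call and `count_tokens` is a crude proxy for a real tokenizer.

```python
# Minimal sketch of context-window refresh via summarization.
# All helper names here are illustrative, not a real tool's API.

MAX_TOKENS = 100  # tiny budget, for illustration only

def count_tokens(messages):
    # Crude proxy for a real tokenizer: whitespace word count.
    return sum(len(m.split()) for m in messages)

def summarize(messages):
    # Stand-in: a real implementation would call an LLM here.
    return "summary of %d earlier messages" % len(messages)

def refresh_context(messages, budget=MAX_TOKENS):
    """Collapse older messages into one summary once over budget."""
    if count_tokens(messages) <= budget:
        return messages
    keep = messages[-3:]  # keep the most recent turns verbatim
    return [summarize(messages[:-3])] + keep
```

The point of the comment stands either way: when the tool does this pass automatically, the operator never has to think about it.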
by jmsdnns on 6/4/25, 3:21 PM
If we take "operator skill" to mean "they know how to make prompts", there is some truth to it, and we can see it in whether or not the operator is deliberately designing the context window.
But on the more important question: whether LLMs are useful has an inverse relationship with how skilled the person already is in the domain they're using them for. This is why the best engineers mostly shrug at LLMs while those who aren't the best feel a big lift.
So, LLMs are not mirrors of operator skill. This post is instead an argument that everyone should become prompt engineers.
by foldr on 6/4/25, 3:09 PM
by satisfice on 6/5/25, 3:44 AM
If you think you know “the best LLM to use for summarizing” then you must have done a whole lot of expensive testing, but you didn’t, did you? At best you saw a comment on HN and you believed it.
And if you did do such testing, I hope it wasn’t more than a month ago, because it’s out of date, now.
The nature of my job affords me the luxury of playing with AI tech to do a lot of things, including helping me write code. But I’m not able to answer detailed technical questions about which LLM is best for what. There is no reliable and durable data. The situation itself is changing too quickly to track unless you have the resources to have full subscriptions to everything and you don’t have any real work to do.
by namuol on 6/4/25, 3:09 PM
Um, what? This is the sort of knowledge you definitely do not need in your back pocket. It’s literally the perfect kind of question for AI to answer. Also, this is such a moving target that most hiring processes probably change at a slower pace than it does.
by mromanuk on 6/4/25, 3:08 PM
The LLM tells me that it prefers the "older way" because it's more broadly compatible, which is fine if that's what you're aiming for. But if the programmer doesn't know about that, they'll be stuck with the LLM calling the shots for them.
by henning on 6/4/25, 3:19 PM
MCP is a tool and may only be minor in terms of relevance to a position. If I use tools that use MCP but our actual business is about something else, the interview should be about what the company actually does.
Your arrogance and presumptions about the future don't make you look smart when you are so likely to be wrong. Enumerating enough predictions until one of them is right is not prescient, it's bullshit.
by patrickhogan1 on 6/4/25, 3:19 PM
That said, I think there’s value in a catch-all fallback: running a prompt without all the usual rules or assumptions.
Sometimes, a simple prompt on the latest model just works—and often surprisingly more effectively than a complex one.
by ninetyninenine on 6/4/25, 2:59 PM
Realistically though I think AI will come to a point where it can take over my job. But if not, this is my hope.
by brcmthrowaway on 6/4/25, 3:08 PM
by mouse_ on 6/4/25, 3:03 PM
by incomingpain on 6/5/25, 11:36 AM
My 2 year old graphics card sure could be bigger... kicks can... it was plenty for starfield...
holy crap vram is expensive!
>If they waste time by not using the debugger, not adding debug log statements, or failing to write tests, then they're not a good fit.
Why would you deprive yourself of a tool that will make you better?
by bgwalter on 6/4/25, 3:08 PM
With that arrogance, my only question is: Where is your own code and what makes you more qualified than Linux kernel or gcc developers?
by jbellis on 6/4/25, 3:03 PM
I started Brokk to give humans better tools with which to do this: https://github.com/BrokkAi/brokk
by elliotbnvl on 6/4/25, 3:08 PM
Do you mean writing manual tests? Because having the LLM write tests is key in iteration speed w/o backtracking.
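The iteration loop this comment describes can be sketched as: accept an LLM-proposed change only while the (LLM-written or otherwise) tests stay green, so a bad proposal is simply dropped rather than backtracked. Every name below is illustrative, not any tool's actual API.

```python
# Sketch: keep an LLM-proposed implementation only if the test
# suite still passes; failed proposals are discarded on the spot.

def run_tests(impl, cases):
    # "cases" plays the role of the LLM-written test suite:
    # (input, expected output) pairs checked against impl.
    return all(impl(arg) == want for arg, want in cases)

def iterate(impl, candidates, cases):
    """Fold candidate changes in, never regressing past green tests."""
    for candidate in candidates:
        if run_tests(candidate, cases):
            impl = candidate  # tests pass: adopt the change
        # otherwise: drop the candidate; nothing to backtrack
    return impl
```

The design choice the comment is getting at: because every accepted state is test-verified, the loop only ever moves forward.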