from Hacker News

LLMs are mirrors of operator skill

by ghuntley on 6/4/25, 2:40 PM with 90 comments

  • by makmanalp on 6/4/25, 3:16 PM

    Counterthoughts: a) These skills fit on a double-sided sheet of paper (e.g. the Claude Code best practices doc), and b) what these skills are has been changing so rapidly that even the best-practices docs fall out of date super quickly.

    For example, managing the context window has become less of a problem: newer models have larger context windows, and tools like the auto-resummarization / context-window refresh in Claude Code mean you might be just fine without doing anything yourself.

    All this to say that the idea that you're left significantly behind if you aren't training yourself on this feels bogus (I say this as a person who /does/ use these tools daily). It should take any programmer no more than a few hours to learn these skills from scratch, with the help of a doc, meaning any employee you hire should be able to pick them up no problem. I'm not sure it makes sense as a hiring filter. Perhaps in the future this will change. But right now these tools are built more like user-friendly appliances - more like a cellphone or a toaster than a technology to wrap your head around, like a compiler or a database.

  • by jmsdnns on 6/4/25, 3:21 PM

    A key thing that has been shown in research at Wharton is that LLMs elevate people with less experience a lot more than they elevate people with lots of experience.

    If we take "operator skill" to mean "they know how to make prompts", there is some truth to it, and we can see that in whether or not the operator is designing the context window.

    But for the more important question: how useful LLMs are has an inverse relationship with how skilled the person already is in the domain they're using them for. This is why the best engineers mostly shrug at LLMs while those who aren't the best feel a big lift.

    So, LLMs are not mirrors of operator skill. This post is instead an argument that everyone should become prompt engineers.

  • by foldr on 6/4/25, 3:09 PM

    Isn’t this just a roundabout way of saying that people who are skilled at using LLMs will get better results from them than people who aren’t? In other words, LLMs “mirror operator skill” in about the same way as hammers, paintbrushes or any other tool. Hardly a blinding insight.

  • by satisfice on 6/5/25, 3:44 AM

    A lot of these questions no one knows the answer to.

    If you think you know “the best LLM to use for summarizing” then you must have done a whole lot of expensive testing. But you didn’t, did you? At best you saw a comment on HN and believed it.

    And if you did do such testing, I hope it wasn’t more than a month ago, because it’s out of date now.

    The nature of my job affords me the luxury of playing with AI tech to do a lot of things, including helping me write code. But I’m not able to answer detailed technical questions about which LLM is best for what. There is no reliable and durable data. The situation itself is changing too quickly to track unless you have the resources to have full subscriptions to everything and you don’t have any real work to do.

  • by namuol on 6/4/25, 3:09 PM

    > If I were interviewing a candidate now, the first things I'd ask them to explain would be the fundamentals of how the Model Context Protocol works and how to build an agent.

    Um, what? This is the sort of knowledge you definitely do not need in your back pocket. It’s literally the perfect kind of question for AI to answer. Also, this is such a moving target that I suspect most hiring processes change at a slower pace than it does.

  • by mromanuk on 6/4/25, 3:08 PM

    Every time I ask an LLM to write some UI and a model for SwiftUI, I have to specify that it should use the @Observable macro (the new way), which it normally does once asked.

    The LLM tells me that it prefers the "older way" because it's more broadly compatible, which is fine if that's what you're aiming for. But if the programmer doesn't know about the difference, they'll be stuck with the LLM calling the shots for them.
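
    For anyone unfamiliar, the difference looks roughly like this (CounterModel is a made-up example; @Observable needs iOS 17+, while ObservableObject goes back to iOS 13):

      import SwiftUI
      import Observation

      // New way (iOS 17+): the @Observable macro; views track property reads automatically.
      @Observable
      final class CounterModel {
          var count = 0
      }

      // Older way (iOS 13+): ObservableObject + @Published, more broadly compatible.
      final class LegacyCounterModel: ObservableObject {
          @Published var count = 0
      }

      struct CounterView: View {
          @State private var model = CounterModel()               // @Observable pairs with plain @State
          @StateObject private var legacy = LegacyCounterModel()  // the old way needs @StateObject

          var body: some View {
              VStack {
                  Button("New: \(model.count)") { model.count += 1 }
                  Button("Old: \(legacy.count)") { legacy.count += 1 }
              }
          }
      }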

  • by henning on 6/4/25, 3:19 PM

    Centering an interview around MCP and "building an agent" (which often means "write a prompt and make HTTP calls to some LLM host service") is incredibly stupid unless that is literally the product the company makes.
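
    To be clear about the scale of the thing: one "agent step" is a prompt in, text out. A minimal sketch, where the endpoint, model name, and JSON shape are all hypothetical:

      import Foundation

      // One "agent" step: POST a prompt to an LLM HTTP service, decode the reply.
      // A real agent just wraps this in a loop and feeds tool results back in.
      struct ChatRequest: Codable { let model: String; let prompt: String }
      struct ChatResponse: Codable { let text: String }

      func agentStep(prompt: String) async throws -> String {
          var req = URLRequest(url: URL(string: "https://llm.example.com/v1/complete")!)
          req.httpMethod = "POST"
          req.setValue("application/json", forHTTPHeaderField: "Content-Type")
          req.httpBody = try JSONEncoder().encode(ChatRequest(model: "some-model", prompt: prompt))
          let (data, _) = try await URLSession.shared.data(for: req)
          return try JSONDecoder().decode(ChatResponse.self, from: data).text
      }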

    MCP is a tool and may only be minor in terms of relevance to a position. If I use tools that use MCP but our actual business is about something else, the interview should be about what the company actually does.

    Your arrogance and presumptions about the future don't make you look smart when you are so likely to be wrong. Enumerating enough predictions until one of them is right isn't prescience, it's bullshit.

  • by patrickhogan1 on 6/4/25, 3:19 PM

    I agree with most of what you’re saying—especially the Unix pipe analogy and the value of building a prompt library while understanding which LLM to use.

    That said, I think there’s value in a catch-all fallback: running a prompt without all the usual rules or assumptions.

    Sometimes, a simple prompt on the latest model just works—and often more effectively than a complex one.

  • by ninetyninenine on 6/4/25, 2:59 PM

    My hope is that AI never improves enough to take over my job, and that the next generation of programmers is so used to learning programming with AI that they become mirrors of hallucinating AI. That would eliminate both ageism and AI taking my job.

    Realistically though I think AI will come to a point where it can take over my job. But if not, this is my hope.

  • by brcmthrowaway on 6/4/25, 3:08 PM

    Is there a good guide or resource on properly using LLMs/agents for coding? How to get started?

  • by mouse_ on 6/4/25, 3:03 PM

    Isn't the whole selling point of ML the idea that operators no longer need to be as skilled? It feels like the goalposts have been moving as of late.

  • by incomingpain on 6/5/25, 11:36 AM

    I have a greater appreciation of LLMs after working out how to run them offline.

    My 2-year-old graphics card sure could be bigger... kicks can... it was plenty for Starfield...

    holy crap vram is expensive!

    >If they waste time by not using the debugger, not adding debug log statements, or failing to write tests, then they're not a good fit.

    Why would you deprive yourself of a tool that will make you better?

  • by bgwalter on 6/4/25, 3:08 PM

    "Someone can be highly experienced as a software engineer in 2024, but that does not mean they're skilled as a software engineer in 2025, now that AI is here."

    With that arrogance, my only question is: Where is your own code and what makes you more qualified than Linux kernel or gcc developers?

  • by jbellis on 6/4/25, 3:03 PM

    Yes! This is an underappreciated point that both sides in "AI makes coding better" vs "AI just writes slop" have mostly missed. Co-authoring code with AI is a qualitatively different activity than writing code by hand, and it requires new skills.

    I started Brokk to give humans better tools with which to do this: https://github.com/BrokkAi/brokk

  • by elliotbnvl on 6/4/25, 3:08 PM

    > If they waste time by not using the debugger, adding debug log statements, or writing tests, then they're not a good fit.

    Do you mean writing manual tests? Because having the LLM write tests is key to iteration speed w/o backtracking.