by vinhnx on 6/4/25, 3:58 PM with 170 comments
by DebtDeflation on 6/4/25, 10:20 PM
- In-Context Learning (providing examples, AKA one-shot or few-shot vs. zero-shot)
- Chain of Thought (telling it to think step by step)
- Structured Output (telling it to produce output in a specified format like JSON)
Maybe you could add what this article calls Role Prompting to that. And RAG is its own thing, where you're basically just having the model summarize the context you provide. But really, everything else just boils down to telling it what you want in clear, plain language.
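For what it's worth, the first and third of those combine naturally in a single prompt. A minimal sketch, assuming the OpenAI Python client (the model name and review data are illustrative):

    # Few-shot (in-context learning) plus structured output in one prompt.
    # Assumes the OpenAI Python client; model and examples are made up.
    from openai import OpenAI

    client = OpenAI()

    prompt = """Classify each review's sentiment. Answer only with JSON like
    {"sentiment": "positive"} or {"sentiment": "negative"}.

    Review: "The battery died after a week."
    {"sentiment": "negative"}

    Review: "Setup took five minutes and it just works."
    {"sentiment": "positive"}

    Review: "Gorgeous screen, but the keyboard flexes badly."
    """

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)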
by haolez on 6/4/25, 6:13 PM
My usage has converged on very simple, minimalistic prompts, with minor adjustments after a few iterations.
by bsoles on 6/5/25, 12:08 AM
This is even worse than "software engineering". The unfortunate thing is that there will probably be job postings for such things, and people will call themselves prompt engineers on account of their extraordinary ability to write sentences.
by ColinEberhardt on 6/4/25, 6:07 PM
by orochimaaru on 6/4/25, 9:17 PM
Given input (…) and preconditions (…), write me Spark code that gives me postconditions (…). If you can formally specify the input, preconditions, and postconditions, you usually get good working code.
1. The Science of Programming, David Gries
2. Verification of concurrent and sequential systems
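A sketch of what such a Hoare-style prompt can look like in practice (the DataFrame schema and conditions below are made up for illustration):

    # Illustrative precondition/postcondition prompt for generating PySpark
    # code; the table schema and conditions are hypothetical.
    prompt = """
    Input: a DataFrame `events` with columns (user_id: string, ts: timestamp, amount: double).
    Preconditions: no null user_id; amount >= 0; ts falls within the last 30 days.
    Write PySpark code satisfying the postconditions: exactly one row per user_id,
    with total_amount = sum(amount), sorted by total_amount descending.
    """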
by heisenburgzero on 6/5/25, 1:59 AM
I'd love to be wrong though. Please share if anyone has a different experience.
by yuvadam on 6/4/25, 9:01 PM
I get by just fine pasting raw code or errors and asking plain questions; the models are smart enough to figure it out themselves.
by leshow on 6/4/25, 7:38 PM
by sherdil2022 on 6/4/25, 4:17 PM
by Kiyo-Lynn on 6/5/25, 1:48 AM
by jwr on 6/5/25, 2:02 AM
by akkad33 on 6/4/25, 5:50 PM
by groby_b on 6/4/25, 8:23 PM
Meanwhile, I just can't get over the cartoon implying that a React Dev is just a Junior Dev who lost their hoodie.
by m3kw9 on 6/4/25, 11:56 PM
by nexoft on 6/4/25, 10:08 PM
by jorge_cab on 6/5/25, 4:10 PM
Like what agentic IDEs are starting to do. I don't copy-paste code in just the right way to optimize my prompt; I select the code I want. And with MCPs picking up, you might not even have to paste input/output at all: the agent can run the code itself and feed the results to the LLM in an optimal way.
Of course, the quality of your instructions matters, but I think that falls outside of "prompt engineering".
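That "agent runs it itself" pattern is easy to picture as an MCP tool. A minimal sketch, assuming the MCP Python SDK's FastMCP helper (the tool name and default command are illustrative):

    # Sketch of an MCP server exposing a "run command" tool, so the agent
    # can execute code and read the output directly instead of you pasting
    # it. Assumes the `mcp` Python SDK; the default command is illustrative.
    import subprocess

    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("runner")

    @mcp.tool()
    def run_command(command: str = "pytest -q") -> str:
        """Run a shell command and return stdout+stderr for the model."""
        result = subprocess.run(command, shell=True, capture_output=True, text=True)
        return result.stdout + result.stderr

    if __name__ == "__main__":
        mcp.run()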
by neves on 6/4/25, 6:42 PM
by air7 on 6/5/25, 5:53 PM
-- https://www.theregister.com/2025/05/28/google_brin_suggests_...
by namanyayg on 6/4/25, 6:52 PM
by julienchastang on 6/5/25, 5:11 PM
by BuyMyBitcoins on 6/5/25, 10:02 PM
It is fascinating that you can give instructions as if you were talking to someone, but part of me feels like doing it this way is imprecise.
Nevertheless, having these tools process natural language for instructions is probably for the best; it makes them dramatically more accessible. That being said, I still feel silly writing prompts as if I were talking to a person.
by GuB-42 on 6/5/25, 6:25 PM
But I am skeptical of the idea that asking an LLM to be an expert will actually improve its expertise. I did a short test prompting ChatGPT to be an "average developer, just smart enough not to get fired", an "expert", and no persona. I got 3 different answers, but I couldn't decide which one was best. The first persona turned out quite funny, with "meh" comments and an explanation of what the code "barely" does, but the code itself was fine.
I fear that by asking an LLM to be an expert, it will get the confidence of an expert rather than the skills of one, and a manipulative AI is something I'd rather not have.
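For anyone who wants to repeat the experiment, a minimal sketch, assuming the OpenAI Python client (the model name, task, and persona wording are illustrative):

    # Compare persona system prompts on the same task. Assumes the OpenAI
    # Python client; the model, task, and personas are illustrative.
    from openai import OpenAI

    client = OpenAI()
    TASK = "Write a Python function that deduplicates a list while keeping order."

    personas = {
        "average": "You are an average developer, just smart enough not to get fired.",
        "expert": "You are an expert Python developer.",
        "no persona": None,
    }

    for name, system in personas.items():
        messages = [{"role": "system", "content": system}] if system else []
        messages.append({"role": "user", "content": TASK})
        reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
        print(f"=== {name} ===\n{reply.choices[0].message.content}\n")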
by ofrzeta on 6/4/25, 6:44 PM
About the roles: Can you measure a difference in code quality between the "expert" and the "junior"?
by bbuchalter on 6/5/25, 4:02 PM
by MontagFTB on 6/5/25, 1:34 AM
by bongodongobob on 6/4/25, 10:35 PM
by almosthere on 6/5/25, 4:35 AM
by b0a04gl on 6/4/25, 7:31 PM
by mseepgood on 6/5/25, 2:13 PM
by max_on_hn on 6/5/25, 1:16 PM
> The key is to view the AI as a partner you can coach – progress over perfection on the first try
This is not how to use AI. You cannot scale the ladder of abstraction if you are babysitting a task at one rung.
If you feel that it’s not possible yet, that may be a sign that your test environment is immature. If it is possible to write acceptance tests for your project, then manually coaching the AI is just a cost optimization: you are simply reducing the tokens it takes the AI to get to the answer. Whether that’s worth your time depends on the problem, but in general, if you find yourself manually coaching your AI, you should stop and either:
1. Work on your pipeline for prompt generation. If you write down any relevant project context in a few docs, an AI will happily generate your prompts for you, including examples and nice formatting. Getting better at this will actually improve your results over time.
2. Set up an end-to-end test command (unit/integration tests are fine to add later, but less important than e2e); see the sketch after this list.
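A minimal sketch of gating agent output on such an e2e command (assumes pytest; the retry count and the agent hook are hypothetical):

    # Accept the agent's change only when the end-to-end suite is green.
    # Assumes pytest; `ask_agent_to_fix` is a hypothetical hook into your
    # agent, and the retry budget of 5 is arbitrary.
    import subprocess

    def e2e_passes() -> bool:
        """Run the end-to-end suite; the command is project-specific."""
        return subprocess.run(["pytest", "tests/e2e", "-q"]).returncode == 0

    def ask_agent_to_fix(feedback: str) -> None:
        """Hypothetical hook: hand the failure output back to your agent."""
        ...

    for attempt in range(5):
        if e2e_passes():
            print("e2e green: accept the change")
            break
        ask_agent_to_fix("e2e suite failing, please fix and retry")
    else:
        print("still red after 5 attempts: time to read the agent's thoughts")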
These processes are how people use headless agents like CheepCode[0] to move faster. Generate prompts with AI and put them in a task management app like Linear; then CheepCode works on the ticket and makes a PR. No more watching a robot work: check the results at the end, and only read the thoughts if you need to debug your prompt.
[0] the one I built - https://cheepcode.com
by vrnvu on 6/5/25, 6:25 AM
What a world we live in.
by Avalaxy on 6/4/25, 8:00 PM