by yubozhao on 10/24/24, 9:46 PM with 6 comments
by gregjor on 10/25/24, 1:04 AM
That may not seem obvious because programmers often do both jobs, and frequently in a haphazard or incremental fashion, discovering requirements along the way as they iterate on versions of code. We do that because working software is very often the most complete and unambiguous representation of requirements — we don’t have a better language for describing software. Prompts in English won’t make that better.
Thinking that the best use of LLMs comes from writing more code seems to lack imagination. Why not use AI to solve the business requirements, rather than having it write code? Instead of “write the code for a metrics dashboard” why can’t we just ask the LLM to show us the dashboard? Because LLMs can’t reason or easily integrate real-time raw data into their model. All they can do now amounts to mimicking a human programmer by looking up similar tidbits of code from the training data.
by Terr_ on 10/24/24, 10:34 PM
I disagree; the history of software engineering is a constant series of "assembly line moments", because we're always making new machines to make the old machines: compilers, macros, garbage-collected memory management, and libraries upon libraries.
There's nothing new about seeking to automate oneself out of a job.