by sails on 5/6/25, 1:54 PM with 12 comments
by klabb3 on 5/9/25, 10:55 AM
It’s faster (yes) than prototype-then-fixup. Why? Because the “live refactor” is harder than the greenfield writing phase. The new knowledge often makes the impl straightforward.
It’s also better quality than design-then-build. The optimal architecture and modularization change as knowledge increases, and that knowledge is best gained through experience. You can design fully upfront, but it’s riddled with analysis paralysis - it’s notoriously hard (and slow) to predict unknowns.
Sounds like good advice? Well, the hardest part isn’t following it – it’s knowing upfront what size the task is. If it turns out to be smaller than expected, you waste a bit of work (prototype-fixup is faster). However, if it’s bigger than you thought, you’re in the best possible position to break the new problem down into subtasks, with no wasted work.
by RedShift1 on 5/9/25, 6:17 AM
by athrowaway3z on 5/9/25, 8:45 AM
A quick grep once in a blue moon can be faster than wrangling an LLM into place, and as an added bonus you can look back and laugh at how big of an idiot you were.
by 1dom on 5/9/25, 8:31 AM
The whole value proposition of the digital world is that we can store and manipulate things for virtually nothing: there isn't the same cost to keeping digital stuff, and so there aren't the same gains from throwing it away, IMO.
by gherkinnn on 5/9/25, 7:43 AM
What you've learned along the way is so much more important.
by cadamsdotcom on 5/11/25, 5:04 AM
Code written to learn and explore a problem space? Sure.
Code written in response to a prompt, which could easily be rewritten - things like a throwaway “please tell me a story about the contents of this CSV and also write code to graph it”. Yep, throw it away.
Or keep it as an example for a later model.
That’s very different from code written to high standards and intended for others’ use.

We need different words for those three varieties of code.
by mehulashah on 5/9/25, 7:14 AM
by gitroom on 5/9/25, 8:06 AM