by a7b3fa on 4/10/20, 2:52 PM with 60 comments
by Ididntdothis on 4/10/20, 8:38 PM
On the other hand, maybe this forgiveness is exactly what allowed us to build complex systems.
by smitty1e on 4/10/20, 4:48 PM
Recalls Gall's Law[1]. "A complex system that works is invariably found to have evolved from a simple system that worked."
Also, TFA invites a question: if handed a big ball of mud, is it riskier to start from scratch and go for something more triumphant, or try to evolve the mud gradually?
I favor the former, but am quite often wrong.
by mannykannot on 4/10/20, 4:51 PM
by carapace on 4/10/20, 5:26 PM
http://pespmc1.vub.ac.be/ASHBBOOK.html
> ... still the only real textbook on cybernetics (and, one might add, system theory). It explains the basic principles with concrete examples, elementary mathematics and exercises for the reader. It does not require any mathematics beyond the basic high school level. Although simple, the book formulates principles at a high level of abstraction.
by xyzzy2020 on 4/10/20, 6:19 PM
His defining characteristic is whether you can only permanently work around a bug (not know it, but merely know _of_ it) vs. find it, know it, and fix it.
Very interesting.
by jborichevskiy on 4/10/20, 6:04 PM
...
> These failures are, individually, mostly comprehensible! You can figure out which browser the report comes from, triage which extensions might be implicated, understand the interactions and identify the failure and a specific workaround. Much of the time.
> However, doing that work is, in most cases, just a colossal waste of effort; you’ll often see any individual error once or twice, and by the time you track it down and understand it, you’ll see three new ones from users in different weird predicaments. The ecosystem is just too heterogenous and fast-changing for deep understanding of individual issues to be worth it as a primary strategy.
Sadly far too accurate.
by naringas on 4/11/20, 1:09 AM
I agree with him when he says it has become impractical to do so. But I just don't like it personally; I got into computing because it was supposed to be the most explainable thing of all (until I worked with the cloud and it wasn't).
I highly doubt that the original engineers who designed the first microchips and wrote the first compilers, etc... relied on 'empirical' tests to understand their systems.
Yet he is absolutely correct that it can no longer be understood, and when I wonder why, I think the economic incentives of the industry might be one of the reasons.
For example, the fact that chasing crashes down the rabbit hole is "always a slow and inconsistent process" will make any managerial decision-maker feel rather uneasy. That makes sense.
Imagine if the first microprocessors were made by incrementally and empirically throwing together different logic gates until they just sort of worked?
by woodandsteel on 4/10/20, 9:42 PM
by lucas_membrane on 4/11/20, 8:43 AM
by natmaka on 4/11/20, 3:34 AM
by INTPnerd on 4/11/20, 3:20 AM
For example, in PHP I often find myself wondering whether a class I am looking at might have subclasses that inherit from it. Since this is PHP and we have a certain amount of technical debt in the code, I cannot 100% rely on a tool to give me the answer. Instead I have to manually search through the code for subclasses and the like. If after such a search I am reasonably sure nothing is extending that class, I will change it to a "final" class in the code itself. Then I will rerun our tests and lints. If I am wrong, eventually an error or exception will be thrown, and this will be noticed. But if that doesn't happen, the next programmer who comes along and wonders if anything extends that class (probably me) will immediately find the answer in the code: the class is final. This drastically narrows down what can actually happen, which makes it much easier to examine the code and refactor or make necessary changes.
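A minimal sketch of that move (the class name and method are invented for illustration; the real point is that the fact "nothing extends this" now lives in the code itself):

    <?php
    // Before: nothing in the source says whether this class is ever extended.
    // After a manual search turns up no subclasses, record that fact directly:
    final class ReportFormatter   // was: class ReportFormatter
    {
        public function format(array $rows): string
        {
            return implode("\n", array_map('json_encode', $rows));
        }
    }
    // If the search missed a subclass somewhere, PHP now refuses to load it
    // with a fatal error, so the mistake surfaces in tests or logs instead of
    // staying silent.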
Another example: often you come across some legacy code that seems like it can no longer run (dead code). But you are not sure, so you leave the code in there for now. In harmony with this article, you might log or in some way monitor whether that path in the code ever gets executed. If, after trying out different scenarios to get it to run down that path, and after leaving the monitoring in place on production for a healthy amount of time, you conclude the code really is dead, don't just add this to your mental model or some documentation; embed it in the code as an absolute fact by deleting the code. If you were wrong, it will manifest as a bug that eventually gets noticed, and you can fix it then.
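A hedged sketch of such a tripwire (the function name and log message are made up; error_log is just one possible monitoring channel):

    <?php
    // Suspected dead path: instead of deleting it outright, leave a tripwire
    // in place and watch production logs for a healthy amount of time.
    function applyLegacyDiscount(float $total): float
    {
        error_log('TRIPWIRE: applyLegacyDiscount() was reached');
        // ... original legacy logic, left untouched for now ...
        return $total * 0.9;
    }
    // If the tripwire never fires, delete the function and its call sites;
    // the deletion itself then records the fact that the path was dead.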
By taking this approach you are slowly narrowing down what is possible and simplifying the code in a way that makes it an absolute fact, not just a theory or a model or a document. As you slowly remove this technical debt, you will naturally adopt rules like: all new classes start out final, and are only changed to non-final when you actually need to extend them. Eventually you will be in a position to adopt new tools, frameworks, and languages that narrow down the possibilities even more, further embedding the mental model of what is possible directly into the code.
by jerzyt on 4/10/20, 6:08 PM
by drvortex on 4/10/20, 6:51 PM