by ripitrust on 6/17/15, 7:49 AM with 39 comments
by d--b on 6/17/15, 10:50 AM
In my opinion, the problem it describes comes from the vagueness of the concept of elegance, and from how counterintuitive it is. It doesn't even have to involve modules or anything like that. In its simplest form it boils down to:
Would you rather write:
    if (a) {
      doSomething();
      if (b) {
        doSomethingElse();
      }
    }

or:

    if (a && b) {
      doSomething();
      doSomethingElse();
    }
    else if (a && !b) {
      doSomething();
    }
Of course, there is no true answer to that question; it always depends on the context. But many programmers will never ever consider option 2, and that is for two good reasons: 1, you make one more test, and 2, you duplicate the call to doSomething(), so your program is larger. So mathematically speaking, option 1 is more elegant: it's shorter, it's faster, it's lighter, what's not to like?

Well, multiply the ifs and the elses, and you will soon find that option 2 is much more readable and changeable, which makes it the more elegant solution to anyone who's an engineer rather than a mathematician.
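To make the "multiply the ifs and the elses" point concrete, here is a rough sketch in OCaml (the names do_something, do_something_else and do_other_thing are stand-ins, not from the comment), recasting option 2 as a pattern match over three flags: every case is spelled out, calls are duplicated, but each branch can be read and changed on its own.

    (* Hypothetical stand-ins for doSomething() and friends. *)
    let do_something () = print_endline "something"
    let do_something_else () = print_endline "something else"
    let do_other_thing () = print_endline "other thing"

    (* Option 2 scaled to three flags: each combination is explicit. *)
    let handle a b c =
      match a, b, c with
      | true, true, true -> do_something (); do_something_else (); do_other_thing ()
      | true, true, false -> do_something (); do_something_else ()
      | true, false, _ -> do_something ()
      | false, _, _ -> ()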
The tension it describes is that inside every programmer, a mathematician and an engineer are endlessly in conflict. That should be a good guide for which style to use: is this mathematician's code, or engineer's code? When you can answer that question, you can decide which style to write your code in.
by rsp1984 on 6/17/15, 11:09 AM
What the article fails to address explicitly, however, is that the whole redundancy vs. dependency conflict is caused by modularization. Without modules there would be no conflict.
So the real questions to answer here are: when do you need modules, or do you need them at all? What should be modularized? And, most importantly, how do you choose smart boundaries? Good answers to these questions will save a project from a world of pain down the road.
Classic OOP / software engineering education these days lacks critical debate about software modularization. Modularization is almost always presented as a good thing. What nobody tells you, however, is that in real-world engineering, on real-world teams, modularization can cause a lot of trouble if it isn't done right.
by tel on 6/17/15, 1:03 PM
Of course, this is a situation that's highly incompatible with C. Let's ditch that.
In ML modules are king. You probably make hundreds in any non-trivial program and the compiler will beat your ass if you muck up their interfaces. Anywhere. Packages are just sets of 3 public modules wrapped up in twine and a README file (coincidentally this is where "ownership and lifecycle" are managed, but, sorry, I'm going to ignore those for a moment).
This could be every bit as bad as I described before, but ML also realized that modules which just form a big dependency tree are actually quite annoying. The whole reason we define public APIs is so that there can be multiple satisficing implementors, but that isn't possible in 99% of module technologies today.
So ML has functors (not Haskell functors, certainly certainly certainly not C++ functors) which are "parameterized modules that actually work". One could distribute their command line parsing module with a pluggable serialization and a pluggable help display. See MirageOS for a giant example of this kind of system working out.
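A minimal sketch of such a functor in OCaml (SERIALIZER, HELP_DISPLAY, Make_cli, Json_serializer and Plain_help are made-up names for illustration, not a real package): the parser is written once against two interfaces, and anything satisfying them can be plugged in.

    module type SERIALIZER = sig
      val encode : (string * string) list -> string
    end

    module type HELP_DISPLAY = sig
      val show : usage:string -> unit
    end

    module Make_cli (S : SERIALIZER) (H : HELP_DISPLAY) = struct
      (* Parse "--key=value" arguments; print help when "--help" is present. *)
      let parse args =
        if List.mem "--help" args then begin
          H.show ~usage:"prog [--key=value] ...";
          None
        end
        else
          args
          |> List.filter_map (fun arg ->
                 match String.split_on_char '=' arg with
                 | [k; v] when String.length k > 2 && String.sub k 0 2 = "--" ->
                     Some (String.sub k 2 (String.length k - 2), v)
                 | _ -> None)
          |> fun pairs -> Some (S.encode pairs)
    end

    (* One possible instantiation: JSON-ish output, plain-text help. *)
    module Json_serializer = struct
      let encode pairs =
        "{" ^ String.concat ", "
                (List.map (fun (k, v) -> Printf.sprintf "%S: %S" k v) pairs) ^ "}"
    end

    module Plain_help = struct
      let show ~usage = print_endline ("usage: " ^ usage)
    end

    module Cli = Make_cli (Json_serializer) (Plain_help)

Someone who wants s-expressions instead of JSON, or a fancier help screen, supplies their own modules and never touches the parsing code.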
Does it really work?
Probably not. It's not in most maintainers' DNA to functor-ize everything. It's also a genuine challenge to do so, since you need to define the right external and internal public APIs, and it takes a significant community effort to standardize them well enough that there's a real chance of re-use.
But at least it's a way forward. Fight the heavy module trees. Let's use some higher order reusability.
by sbov on 6/17/15, 5:48 PM
E.g. we have client and server code. The serialization configuration of the two is implicitly interdependent: if the client expects dates in a different format than the server, things don't work. To make that implicit dependency explicit, we use a module, which also has the effect of making sure the two don't get out of sync.
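In OCaml terms that might look like the following sketch (Wire_format, request_body and parse_since are hypothetical names, not from the comment): both sides call the same module, so the date format has exactly one definition.

    module Wire_format = struct
      (* Single source of truth for dates on the wire: YYYY-MM-DD. *)
      let date_to_string (y, m, d) = Printf.sprintf "%04d-%02d-%02d" y m d
      let date_of_string s = Scanf.sscanf s "%d-%d-%d" (fun y m d -> (y, m, d))
    end

    (* Client side builds requests with it ... *)
    let request_body since = {|{"since": "|} ^ Wire_format.date_to_string since ^ {|"}|}

    (* ... and the server side parses with the very same module. *)
    let parse_since s = Wire_format.date_of_string s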
by tempodox on 6/17/15, 12:44 PM
I agree with the OP that commonly, “dependencies are worse”. Redundancy will increase the quantity of your code, but dependencies increase its complexity. And quantity is always more easily conquered than complexity.
by guard-of-terra on 6/17/15, 1:36 PM
Perl & Ruby are even more eager, which should seem strange since they're actually less safe.
by rwallace on 6/17/15, 5:19 PM
If anything, larger modules like the ones I listed are more likely to be worth depending on because they do more. It's no coincidence that the author chooses command line parsing as a negative example - something trivial enough that the overhead of tracking a dependency may well outweigh the effort of implementing it yourself.
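For the trivial end of that spectrum, here is a rough OCaml sketch of the "implement it yourself" option (names are illustrative, and it assumes only bare --flag arguments matter):

    (* Collect bare "--flag" arguments from the command line; anything
       fancier than this is where a dependency starts to earn its keep. *)
    let parse_flags argv =
      Array.to_list argv
      |> List.filter_map (fun arg ->
             if String.length arg > 2 && String.sub arg 0 2 = "--"
             then Some (String.sub arg 2 (String.length arg - 2))
             else None)

    let () =
      let flags = parse_flags Sys.argv in
      if List.mem "verbose" flags then print_endline "verbose mode on"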
by michaelfeathers on 6/17/15, 12:56 PM