There are tools that I’ve used occasionally for many years that I’ve just started to appreciate lately. “Oh, that’s why they did that.”
When you see something that looks poorly designed, don’t just exclaim “Why would anyone do that?!” but ask sincerely “Why would someone do that?” There’s probably a good reason, or at least there was a good reason at the time.
If something has lasted a long time and has a lot of users, the designers must have done something right. Their choices made sense in some context. Maybe it wasn’t designed for you and your needs. Maybe you’re not using it as intended. Maybe the design made sense at the time but the world has since changed.
Technologists are bad at conveying motivation. We document, or reverse engineer, the what, but the why is much harder to come by. It’s not something people like to talk about.
Related post: Beethoven, Beatles, and Beyoncé: more on the Lindy effect
8 thoughts on “Why would anyone do that?”
That’s why I enjoyed Stroustrup’s book “The Design and Evolution of C++”, although I’m not a programmer (and I hated the C++ class in university, mainly because everything was completely unmotivated and no one questioned that — everyone just oohed and aahed at the concepts presented).
On the other hand, some people just make stupid choices because they don’t know better; one shouldn’t look too deeply for reasons.
Hanlon’s razor says to assume stupidity before malice. Maybe we need a maxim for assuming misunderstanding before that.
Particularly, that we are the ones doing the misunderstanding.
I think a lot about Hanlon’s razor. For example, I’m much more concerned with sloppiness than fraud in science, even though fraud happens.
I would think this is more akin to Chesterton’s Fence than Hanlon’s Razor. As someone who supports and has supported many legacy systems, I’ve found Chesterton’s Fence a great guide to refactoring, fixing, and maintaining code. Sometimes it is just junk; most times there is a reason.
One corollary: it’s valuable to document alternative approaches (e.g. components, strategies, courses of action) that were considered and rejected, and the reasons they were rejected. This saves time later when people suggest the same alternative. And it speeds learning when you determine that others chose a road you did not take and were successful: determining the flaws in your analysis will improve your capabilities. I think this applies at an individual, team, and organizational level. Ackoff documented this as part of his “decision record model” (see https://ackoffcenter.blogs.com/ackoff_center_weblog/files/Why_few_aopt_ST.pdf ).
G. K. Chesterton made the same point about culture more broadly: https://en.wikipedia.org/wiki/Wikipedia:Chesterton%27s_fence
I suppose that’s also the point of cost/benefit analysis, isn’t it?