A piece of software is said to be encapsulated if someone can use it without knowing its inner workings. The software is a sort of black box. It has a well-defined interface to the outside world. “You give me input like this and I’ll produce output like that. Never mind how I do it. You don’t need to know.”
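To make the idea concrete, here is a minimal sketch in R (the function and its name are hypothetical): the caller relies only on the documented interface, never on what happens inside.

    # Interface: give me a numeric vector of scores, I give you one summary number.
    # How the summary is computed is the black box's business.
    summarize_scores <- function(scores) {
      # Inner workings the caller never needs to see: trimming extreme values, etc.
      mean(scores, trim = 0.1)
    }

    summarize_scores(c(70, 85, 90, 95, 100))  # input in, output out; no peeking inside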
I think software development focuses too much on logical encapsulation. Code is logically encapsulated if, in theory, there is no logical necessity to look inside the black box.
Maybe there would be no need to look inside the code if everything were working correctly, but there’s a bug. Or maybe the code isn’t buggy so much as incompletely documented. Maybe your inputs push the box beyond its designed limits and you don’t know that until you use it. Or maybe the code is taking too long to execute.
Maybe there’s nothing wrong with the code, but you don’t trust it. In that case, the code is logically encapsulated but not psychologically encapsulated. That lack of trust negates the psychological benefits of encapsulation. This may be the most insidious breach of encapsulation. A failure of logical encapsulation is objective and may easily be fixed. A loss of confidence may be much harder to repair.
Developers may avoid using a component long after bugs have been fixed. Or they may use the component but be wary of it. They don’t concentrate their full mental energy on their own code because of a lack of trust in their dependencies. When a bug shows up in their code, they may waste time checking the untrusted component.
Psychological encapsulation might explain in part why people are more productive using tools they like. For example, Windows runs well for people who like Windows. Linux runs well for people who like Linux. Macs run well for people who like Macs. It’s as if the computer knows who its friends are.
Some of this is confirmation bias: we’re less likely to notice the flaws in things we like and more likely to notice the flaws in things we don’t like. But I think there’s more going on. If someone is using an operating system they distrust, they’re going to be less productive, even if the operating system performs flawlessly. Every time something goes wrong, they suspect it might be the operating system’s fault. Of course an error might be the operating system’s fault, but this is rare.
Related post: Opening black boxes
That’s right, it reminds me of Joel Spolsky’s article.
I think that one of the big issues with psychological encapsulation is the choice of sensible defaults for both inputs and outputs, which makes working with the software, and trusting it, easier.
An issue I often face with R, for example, is that the default for many packages is to refuse to process data when there are missing values. I find this very annoying because I always have missing data. Another issue is output with too many decimal places. There are fixes for both issues, but I struggle with those choices, and they make me trust the system less.
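For concreteness, here is the kind of base-R default the commenter is describing (the data here are made up):

    x <- c(1.23456, NA, 7.89123)

    mean(x)                # NA: by default the summary refuses to ignore missing values
    mean(x, na.rm = TRUE)  # 4.562895: the fix exists, but you have to know to ask for it

    round(mean(x, na.rm = TRUE), 2)  # 4.56: trimming the output to fewer decimal places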
Reminds me of talks about “compiler errors”.
I’m reminded of Seth Godin’s recent blog post on Assuming goodwill. That made me think about whether ‘psychological encapsulation’ makes sense with respect to the people we deal with, not just programs.
Using a good test suite like RSpec and writing thorough, detailed tests provides a level of reassurance every time the tests run and nothing fails.
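The same kind of reassurance can be sketched in R with the testthat package (a different tool than RSpec, but the idea is the same; the function under test is hypothetical):

    library(testthat)

    # A hypothetical black box we depend on
    clamp <- function(x, lo, hi) pmin(pmax(x, lo), hi)

    test_that("clamp keeps values inside the requested range", {
      expect_equal(clamp(5, 0, 10), 5)
      expect_equal(clamp(-3, 0, 10), 0)
      expect_equal(clamp(42, 0, 10), 10)
    })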
See “Test-Driven Development” (TDD):
http://en.m.wikipedia.org/w/index.php?title=Test-driven_development&mobileaction=view_normal_site
More specifically, you should be able to look at a chunk of code that calls an API and have it be decidable (by machine in principle, or at least by passing tests) whether it is being called correctly, without having to reason about behavior that isn’t explicitly documented in the interface. If some apparently correct code fails because of implementation details, then either the documentation (including types, preconditions, and postconditions) is at fault or the implementation is not living up to that documentation.
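One way to make that concrete in code: state the contract in the interface and check it at the boundary, so a caller can judge correctness without reading the implementation. A hypothetical R sketch:

    # Contract (documented at the interface):
    #   Precondition:  x is a non-empty numeric vector with no missing values.
    #   Postcondition: the result is a single non-negative number.
    rms <- function(x) {
      stopifnot(is.numeric(x), length(x) > 0, !anyNA(x))  # preconditions checked up front
      out <- sqrt(mean(x^2))
      stopifnot(length(out) == 1, out >= 0)                # postconditions checked before returning
      out
    }

    rms(c(3, 4))      # 3.535534: a call whose correctness is decidable from the contract alone
    # rms(c(1, NA))   # would fail fast at the precondition, not deep inside the implementation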