Software sins of omission

The Book of Common Prayer contains the confession

… we have left undone those things which we ought to have done, and we have done those things which we ought not to have done.

The things left undone are called sins of omission; things which ought not to have been done are called sins of commission.

In software testing and debugging, we focus on sins of commission: code that was implemented incorrectly. But according to Robert Glass, the majority of bugs are sins of omission. In “Frequently Forgotten Fundamental Facts about Software Engineering,” Glass says

Roughly 35 percent of software defects emerge from missing logic paths, and another 40 percent are from the execution of a unique combination of logic paths.

If these figures are correct, three out of four software bugs are sins of omission, errors due to things left undone. These are bugs due to contingencies the developers did not think to handle. Three quarters seems like a large proportion, but it is plausible. I know I’ve written plenty of bugs that amounted to not considering enough possibilities, particularly in graphical user interface software. It’s hard to think of everything a user might do and all the ways a user might arrive at a particular place. (When I first wrote user interface applications, my reaction to a bug report would be “Why would anyone do that?!” If everyone would just use my software the way I do, everything would be OK.)

It matters whether bugs are sins of omission or sins of commission, because different kinds of bugs are caught by different means. Developers have come to appreciate the value of unit testing lately, but unit tests primarily catch sins of commission. If you didn’t think to program something in the first place, you’re not likely to think to write a test for it. If you assume 75% of bugs come from code that no one thought to write, complete test coverage could find at most 25% of a project’s bugs.
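To make that concrete, here is a minimal sketch (the shipping function and its prices are hypothetical): the tests below were written by reading the finished code, so they exercise exactly the branches the developer thought of and nothing else.

```python
def shipping_cost(weight_kg):
    """Cost in dollars -- for the cases the developer considered."""
    if weight_kg <= 1:
        return 5
    return 5 + 2 * (weight_kg - 1)

# Tests derived from the code above: one per branch, all passing.
assert shipping_cost(1) == 5
assert shipping_cost(3) == 9

# No test probes weight_kg <= 0, because no one wrote code for that case;
# the sin of omission is invisible even at 100% branch coverage.
```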

The best way to spot sins of omission is a fresh pair of eyes. As Glass says

Rigorous reviews are more effective, and more cost effective, than any other error-removal strategy, including testing. But they cannot and should not replace testing.

One way to combine the benefits of unit testing and code reviews would be to have different people write the unit tests and the production code.


12 thoughts on “Software sins of omission”

  1. My experience when writing code test-first is that it actually helps a lot with error conditions and other sins of omission. The process just helps me think much more clearly about what can go wrong. It also forces you to write code in such a way that error conditions can be simulated. So I don’t entirely agree with you that Unit Testing (or at least TDD) doesn’t help with sins of omission.

    You are of course right though that Unit Testing won’t stop all bugs, so review and also exploratory testing are still very important.

  2. Mat, I agree with you that TDD could help reduce sins of omission. It’s easier to think about what needs to be done when you don’t (yet) need to implement it. But after-the-fact unit testing won’t help as much. If you write your code first, it’s natural to see your code base as your to-do list for writing tests. In that case you’re mostly going to write tests that correspond to code you’ve written, though a few more logic paths may come to mind in the process.

  3. Considering that it is difficult to test all the logical paths the software can take, perhaps it is better to have beta testing by the users, then note down the issues they faced and what improvements they would want. This can be done faster with prototyping or the tracer-bullet method, to pin down as closely as possible what the users actually need, i.e., which logical paths are going to be used in the field. Once this is done, making the necessary implementations, reviews, and testing would be much easier because of their concreteness.

  4. The more I’m writing code, the more I believe this. I used to be skeptical of the suggestion that exceptions are bad… But, for exactly the reason you’re talking about here, I’ve now come to see the light.

    This is actually one of the features I really, really like about languages which are based on pattern matching: it’s a lot easier to notice when error cases aren’t handled. For example, it only takes a basic knowledge of Erlang to see that:

    case file:open(SomeFile) of
      { ok, File } -> print_lines_in(File)
    end.

    is very obviously missing { error, _ } error handling.

    But, given roughly the same Python code:

    file = open(some_file)

    It’s a lot harder to see that an error handler is missing.
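     For contrast, a minimal sketch of the explicit Python version (the file name and function are hypothetical): the error case now has a visible home, but unlike the Erlang pattern match, nothing in the language points out when it is missing.

```python
some_file = "definitely_missing.txt"   # hypothetical path for this sketch

def print_lines_in(path):
    """Happy path plus the error case the bare open() call omits."""
    try:
        f = open(path)
    except OSError as err:   # FileNotFoundError, PermissionError, ...
        return f"could not open {path}: {err}"
    with f:
        return f.read()

print(print_lines_in(some_file))
```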

  5. You should try mutation testing, e.g. what Heckle does for Ruby. It’s amazing how many errors it can find, both sins of omission and commission. The tool is actually pretty annoying when you start to use it (I had a lot of reactions of the type “but why would you try to do that?”) but after a while you don’t want to work without it any more. Unfortunately, it doesn’t work for user interfaces :-(
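     The idea can be sketched in a few lines of Python (the discount function and its tests are hypothetical, and this is not how Heckle itself works): mutate an operator in the code under test, re-run the suite, and see whether it notices.

```python
import ast

SRC = """
def discount(total):
    return total * 0.9 if total > 100 else total
"""

def suite_passes(ns):
    d = ns["discount"]
    return d(200) == 180.0 and d(50) == 50   # note: no test at total == 100

class SwapGt(ast.NodeTransformer):
    """Replace every '>' comparison with a given operator."""
    def __init__(self, new_op):
        self.new_op = new_op
    def visit_Compare(self, node):
        if isinstance(node.ops[0], ast.Gt):
            node.ops[0] = self.new_op
        return node

def mutant_survives(new_op):
    tree = ast.fix_missing_locations(SwapGt(new_op).visit(ast.parse(SRC)))
    ns = {}
    exec(compile(tree, "<mutant>", "exec"), ns)
    return suite_passes(ns)

print(mutant_survives(ast.Lt()))    # False: the '<' mutant is killed
print(mutant_survives(ast.GtE()))   # True: the '>=' mutant survives, because
                                    # the boundary total == 100 was never tested
```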

  6. Thanks for this posting. It reminded me of a paper I wrote for Pentasafe a few years ago: Software developers are [usually] bad software testers.

  7. Interesting posting, but I think there is a slight error in your definition:

    “The things left undone are called sins of omission; things which ought not to have been done are called sins of commission.”

    Sins of commission are usually defined as things which we know are wrong but we do them anyway! It is the knowledge of their wrongness that is the key. I’m not sure that too many software bugs ever fall into this category – well, I hope not anyway!

  8. Back when I had dreams of a PhD, I did a survey of many classifications of errors. 75% is on the high side for faults of omission. That’s not to say that I’m downplaying them – on the contrary, they’ve obsessed me for many years. But we should realize the % of faults of omission is pretty context-dependent.

    Regarding Jester and other weak-mutation coverage tools, I didn’t find them wildly successful at signaling faults of omission. Most of the current crop of mutation tools are fairly limited in the kind of program transformations they can do. My GCT tool did a lot more, but I concluded that was a blind alley.

  9. Having a high percentage of errors of omission does not strike me as surprising when we start from a blank slate and add code until we seem to have finished. I would think that coding by modifying or otherwise reusing existing code would lead to a higher proportion of errors of commission (case in point: the failure of the first Ariane 5 launch). Perhaps the difference between Marick’s and Glass’s figures, if statistically significant, can be attributed to different patterns of development over time or applications.

  10. Another challenge here is that the requirements of software can change, but by default the code remains the same, of course. It makes me wonder how much of that omission is due to changing targets, or even “creeping normality,” where the target changes in increments small enough that people don’t believe the code needs to be changed, while it is obvious from the end result that it does.

Comments are closed.