It’s not enough for software to be correct. It has to be defensible.
I’m not thinking of defending against malicious hackers. I’m thinking about defending against sincere critics. I can’t count how many times someone was absolutely convinced that software I had a hand in was wrong when in fact it was performing as designed.
In order to defend software, you have to understand what it does. Not just one little piece of it, but the whole system. You need to understand it better than the people who commissioned it: the presumed errors may stem from unforeseen consequences of the specification.
Related post: The buck stops with the programmer
John,
This is absolutely correct. No one understands this better than software/firmware expert witnesses. In order to defend or explain your client’s software, you often come to understand how it works, and why it works (or doesn’t!), better than the client does.
I’ve also seen my share of software that, uh, “backs into correctness” – put another way, it just happens to work (sometimes even under all circumstances, not just most of the time), but accidentally, in spite of itself.
Ah, fun stuff.
I wrote about this topic a few months back here http://blog.noblemail.ca/2012/05/analyst-measure-thyself.html
In short, I think the key to defensible software is to give the attacker the tools that would demonstrate that it’s not working. If you answer the question of “how do I know this works?” with “this is why it should work,” you’re failing. Whether you call the demonstration validation, or verification, or testing, it requires understanding what the software is supposed to accomplish.
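One way to read the comment above: ship the demonstration alongside the software, so a critic can run it against the stated specification instead of arguing from intuition. A minimal sketch of the idea, using a hypothetical rounding function (the function and its spec are illustrative, not from the post) — rounding is a classic case where software is accused of being wrong while performing as designed, since Python's built-in `round` uses banker's rounding:

```python
from decimal import Decimal, ROUND_HALF_UP

def round_half_up(x: float) -> int:
    """Round to the nearest integer, ties away from zero.

    Hypothetical spec: 2.5 -> 3, -2.5 -> -3, unlike Python's
    built-in round(), which rounds ties to even (round(2.5) == 2).
    """
    return int(Decimal(str(x)).quantize(Decimal("1"), rounding=ROUND_HALF_UP))

def demonstrate() -> None:
    # These checks are the answer to "how do I know this works?" --
    # they document the intended behavior and let a skeptic verify it.
    assert round_half_up(2.5) == 3
    assert round_half_up(3.5) == 4
    assert round_half_up(-2.5) == -3
    assert round_half_up(2.4) == 2
    # The "presumed error" a critic might report is actually the
    # built-in's documented banker's rounding, not a bug here:
    assert round(2.5) == 2
    print("all checks pass")

if __name__ == "__main__":
    demonstrate()
```

Whether you call this a test suite or an acceptance demonstration, the point is the same: the checks encode what the software is supposed to accomplish, so a dispute becomes a question of reading the spec rather than trusting the programmer.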