Here’s a quote from Bricklin on Technology regarding what colleges should teach in software engineering. (I added the bullets.)
For years we emphasized
- execution speed,
- memory constraints,
- data organization,
- flashy graphics,
and algorithms for accomplishing this all. Curricula need to also emphasize
- robustness,
- testing,
- maintainability,
- ease of replacement,
- security, and
- verifiability.
The criteria in the first list are primarily mathematical. The criteria in the second list have more to do with human nature. For example, code is maintainable if it’s organized so that a person can readily understand and modify it. That’s a matter of psychology.
More projects fail due to problems with the second list than with the first. Problems with the first list tend to be localized; problems with the second list tend to permeate a project. A clever person may have a quick fix for problems with the first list. Quick fixes are rare for problems on the second list.
Excellent article. I’m in absolute agreement. Since I’m in training to be an algorithmist, I can’t quite say that it’s a good idea to ignore the first list :-) (with the exception of graphics; I really don’t care about visual UX, because there are people far better at it than me). But I agree; algorithms are best implemented as a module.
My experience as a software engineer has taught me that every single item in the second list will cost time. These are all items that must be considered from design through maintenance, but they are too often attempted as turnkey implementations, or patched in with duct tape, spackle, and baling wire at the last minute.
Great point. However, robustness is commonly addressed mathematically, with something like a “rate of failure”. We could probably model all the elements of the second list mathematically.
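As a rough sketch of what such modeling could look like (the component names and failure rates below are invented purely for illustration), a simple reliability estimate for a series system multiplies the survival probabilities of its parts:

```python
# Illustrative sketch only: estimating the reliability of a simple series
# system, i.e. one that fails if any single component fails.
# The component names and failure rates are invented for the example.

import math

# Assumed constant failure rates (failures per hour) for three components.
failure_rates = {"parser": 1e-5, "database": 5e-6, "network": 2e-5}

mission_time_hours = 1000.0

# For a constant failure rate L, reliability over time t is R(t) = exp(-L * t).
component_reliability = {
    name: math.exp(-rate * mission_time_hours)
    for name, rate in failure_rates.items()
}

# In a series system every component must survive, so reliabilities multiply.
system_reliability = math.prod(component_reliability.values())

print(f"System reliability over {mission_time_hours:.0f} h: {system_reliability:.4f}")
```

Of course, this kind of model only captures the measurable surface of “robustness”; the human factors behind the second list are harder to reduce to a formula.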
It seems to me that the first list describes “core Computer Science” whereas the second list is more about “good software design”. Don’t expect a Computer Science professor to know about designing good software. This requires practice, and Computer Science professors typically don’t write *any* software. Ever.
Or if they do, it’s a one-off that doesn’t ever get integrated into anything, and thus doesn’t need to meet the restrictions noted in the second list.
There are curricula that attempt to address these issues through teamwork and long-term projects. Unfortunately, there are software engineers (thankfully in the minority) who never learn these principles in the workplace either.
I agree. And your point that items in the second list have more to do with psychology and human nature is important. It is people who commission, manage, design, build and use software; every aspect of software creation is impossible without people, yet students of software are taught little or nothing about how they “work”. Indeed, they are predisposed to be uninterested in, even antagonistic towards, any efforts to teach them psychology, let alone soft skills. Interestingly, there was a similar problem in Economics (i.e. it was dominated by a narrow mathematical world view), but Behavioural Economics is changing all that. Will there be a similar movement in Computer Science and Software Engineering? I believe it is overdue, and academia could provide leadership in this area.
Problems on the second list must be considered throughout the development of the project. They have to be refined each time you get a better understanding of the project.
I see a lot of projects wanting to go forward without considering these issues. You end up with an unmaintainable mess. You have to solve these issues little by little each day, mainly by fighting against entropy.
Curious about your rationale for putting “verifiability” and “testing” on the second list. I always thought those were a bit easier to solve and not ones that permeate the entire project. Clear or clarified requirements, plus good-quality QA folks or good-quality automated tests, don’t always require a rework of the whole project – though (depending on the scope of the project) I’ll concede that automated testing can.
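For what it’s worth, here is a minimal sketch of the kind of localized automated test I have in mind; the `parse_price` function and the pytest usage are an assumed example, not anything from the article:

```python
# Minimal sketch of a localized automated test (pytest style).
# parse_price is a hypothetical function used only for illustration.

import pytest

def parse_price(text: str) -> float:
    """Parse a price string like '$1,234.50' into a float."""
    cleaned = text.strip().lstrip("$").replace(",", "")
    return float(cleaned)

def test_parse_price_plain():
    assert parse_price("19.99") == pytest.approx(19.99)

def test_parse_price_with_symbol_and_commas():
    assert parse_price("$1,234.50") == pytest.approx(1234.50)

def test_parse_price_rejects_garbage():
    # float() raises ValueError on non-numeric input.
    with pytest.raises(ValueError):
        parse_price("not a price")
```

Tests like this can be added to one corner of a codebase without reworking the whole project, which is why testing feels more tractable to me than, say, maintainability.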
The practices I have been taught for dealing with items from the first list have typically been based on dead machines, and as recently as this month I have had to point out basic efficiency issues to professionals who should know better.
Put differently: performance optimization is often not a valid activity. When it is a valid activity, it needs to be based on careful measurements (as opposed to big O). Simplicity is almost always a more important issue (and, in the hot spots where efficiency matters, system simplicity — which includes having good tests for important features — is what makes addressing the hot spots possible).
Put differently, the first rule of optimization is “Don’t”. You make things fast by not doing things that you do not need to do. But optimization itself can be one of these things. And often I have seen people doing “extra work” in the name of optimization (but with the practical consequence of slowing things down).
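To make the “measure first” point concrete, here is a small sketch; the two implementations are invented for illustration, and the point is simply to compare real timings rather than assume the “clever” version is faster:

```python
# Sketch: measure before optimizing. The two implementations below are
# invented for illustration; neither is claimed to be the right approach.

import timeit

def sum_squares_simple(n):
    """Straightforward version."""
    return sum(i * i for i in range(n))

def sum_squares_clever(n):
    """A 'hand-optimized' version that precomputes a list first."""
    values = [i * i for i in range(n)]
    total = 0
    for v in values:
        total += v
    return total

if __name__ == "__main__":
    for fn in (sum_squares_simple, sum_squares_clever):
        seconds = timeit.timeit(lambda: fn(10_000), number=1_000)
        print(f"{fn.__name__}: {seconds:.3f} s for 1000 calls")
```

If the measurements show no meaningful difference, the extra “optimization” work was exactly the kind of thing you did not need to do.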
Not only psychology. Consider the logical and structural basics too.
While I agree in general, I would emphasize “simplicity in design”. Designs are becoming increasingly (and unnecessarily) complex. This is also related to human nature: we are creating more and more frameworks loaded with semantics that pertain more to the solution domain than to the problem domain.