Michael Fogus posted a pair of tweets on Twitter this morning:
Computing: the only industry that becomes less mature as more time passes.
The immaturity of computing is used to excuse every ignorance. There’s an enormous body of existing wisdom but we don’t care.
I don’t know whether computing is becoming less mature. It may well be, at least on average, even as individual developers become more mature.
One reason is that computing is a growing profession, so people are entering the field faster than they are leaving. That lowers average maturity.
Another reason is chronological snobbery, alluded to in Fogus’s second tweet. Chronological snobbery is pervasive in contemporary culture, but especially in computing. Tremendous hardware advances give the illusion that software development has advanced more than it has. What could I possibly learn from someone who programmed back when computers were 100x slower? Maybe a lot.
There is a lot to learn from past masters of computer programming: how to write efficient programs, how to design and implement software with a low initial defect density, how to approach testing so that most bugs are caught and removed before they escape, and so forth. However, many of today’s developers are more interested in other goals: how to develop a mostly-working product as fast as possible, how to follow the latest UI design styles (or fads, to be blunt), and how to design software so that updates can be rolled out with minimal user intervention. Only the last of these is truly novel. The other two have been addressed before, and language and system architects sometimes apply lessons from that accumulated experience, but much of it gets discarded in favor of appearing new.
On the other hand, modern hardware behavior does bear on received wisdom. When CPUs ran at 1% of their current speed, memory ran at perhaps 5% or 10% of its current speed; that widening gap changes the constant factors of some algorithms, so what was faster then can be (much) slower now. A good example is presented by Bjarne Stroustrup. (As always, the usual rules of optimization club apply.)
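Assuming the example meant is Stroustrup’s well-known vector-versus-list demonstration, here is a minimal C++ sketch of it: keep a sequence of random integers sorted as each one arrives, once with std::vector and once with std::list. The list offers cheap insertion once the position is found, yet on modern hardware the vector usually wins by a wide margin because its linear scan is cache-friendly while the list’s pointer chasing is not. The sizes and timing details below are only illustrative.

```cpp
// Minimal sketch of the sorted-insertion experiment (assumed to be the
// example referred to above). Not a rigorous benchmark.
#include <algorithm>
#include <chrono>
#include <iostream>
#include <list>
#include <random>
#include <vector>

template <typename Seq>
long long insert_sorted(int n, unsigned seed) {
    std::mt19937 rng(seed);
    std::uniform_int_distribution<int> dist(0, 1000000);
    Seq seq;
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < n; ++i) {
        int x = dist(rng);
        // Linear scan to find the insertion point, as in the original example.
        auto pos = std::find_if(seq.begin(), seq.end(),
                                [x](int y) { return y >= x; });
        seq.insert(pos, x);
    }
    auto stop = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count();
}

int main() {
    const int n = 50000;  // large enough to show the gap, small enough to finish quickly
    std::cout << "vector: " << insert_sorted<std::vector<int>>(n, 42) << " ms\n";
    std::cout << "list:   " << insert_sorted<std::list<int>>(n, 42) << " ms\n";
}
```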
That is precisely the reason I used to hack together some of Stephen Wolfram’s 1D artificial life algorithms, a Mandelbrot renderer, and other things on a C64 in assembler. It doesn’t matter whether it’s a 1 MHz 6510, a Zilog Z80, a Motorola 6800, or even an early x86 machine; anything will do. You should try going retro, giving the machine even more ways to misunderstand your intentions, and letting that train your discipline. On these machines you begin to get a grip on how software is built out of zeroes and ones. Making something out of what seems like nothing is not only a lot of fun, it will also inspire you to get creative in how you do it, especially when things get slow or don’t go the way you expected.
MAR771, I used to do 6502/6510 programming on the C64, and before that on the VIC-20. On the VIC you had 2.5k of RAM, so you learned to code like a contortionist. There were also ~864 bytes in the cassette buffer that you could use if you weren’t going to load anything. You learned where every free *bit* in the machine was. Zero page (the first 256 bytes) ran faster because its instructions needed only one byte for the address, which mattered on a 1 MHz machine. Of course there were only two or three free user bytes in zero page, but if you masked, you could use the top bits of some of the other bytes. The point is that both time and space were at a premium. You couldn’t throw faster or more hardware at the problem, which is the norm now. Coders don’t come out of school with any concept of limitations, so they naturally write inefficient code, because they can get away with it. It’ll be “good enough.”
Cliff Hall, if memory consumption (or bandwidth, or CPU time, or whatever other metric) only bothers a tiny fraction of users, how much time should a developer spend reducing it at the expense of other goals? Is it almost always better to deliver a product two weeks later but use half the memory, or to provide fewer features with markedly lower CPU utilization? I would say that each of those decisions could go either way, depending on what the “customer” wants.
The only hard optimization decisions are the ones that involve significant trade-offs. If a developer can improve one metric without measurably impacting others (among the set of metrics that the developers, users and buyers care about), we should call that a good practice rather than optimization. Thus, immaturity is a concern because experience tends to improve a developer’s estimates of which metrics are important and how development decisions impact those metrics; experience and training tend to improve a developer’s “toolkit” of implementation alternatives, with associated performance estimates.
Michael P wrote: “There is a lot to learn from past masters of computer programming”
One thing one can learn is that different tradeoffs lead to different optima. In the days of very expensive, relatively low-performance computers, Mel-style programming made much more sense. (Lower career mobility may also have increased tolerance for less maintainable code.) The expected lifetime and popularity of the programming interface are also factors; for early machine-language programming, the likelihood that a future machine would not be binary compatible reduced the importance of maintainability.
Learning that tradeoffs change is important to becoming a good programmer. Hardware changes (as Michael P noted for the cost of memory access versus computation), the available software changes (e.g., buy/search-for vs. make), and the intended use varies and changes. If one writes a single-use script with the same methods one uses for a 500k LOC enterprise-critical program, one is probably doing it wrong. Changes in use, including unexpected potential reuse, make choosing a programming methodology more difficult.
For maintainability, there seems to be a tradeoff comparable to the clutter-versus-discoverability tradeoff in user interfaces. If one accounts for every conceivable way of extending or modifying a program (making it in some ways more maintainable), not only will development time increase unnecessarily and performance likely decrease (the abstraction penalty), but maintainability itself might suffer, both from the unnecessary complexity of the system and from unexpected changes breaking the conceptual unity of the design.
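To make that failure mode concrete, here is a small, entirely hypothetical C++ sketch (not drawn from the discussion above): a trivial calculation routed through an interface and a factory that exist only to accommodate variations nobody has asked for, next to the direct version. The indirection costs design and review time, adds an allocation and a virtual call the optimizer may not remove, and gives a maintainer three places to look instead of one.

```cpp
// Hypothetical illustration of speculative generality; names are invented.
#include <memory>

// Over-abstracted: an extension point for a policy that may never vary.
struct TaxPolicy {
    virtual ~TaxPolicy() = default;
    virtual double apply(double amount) const = 0;
};

struct FlatTax : TaxPolicy {
    double apply(double amount) const override { return amount * 1.20; }
};

std::unique_ptr<TaxPolicy> make_default_policy() {
    return std::make_unique<FlatTax>();
}

double total_with_tax_abstract(double amount) {
    // An allocation plus a virtual call, and three places to read instead of one.
    return make_default_policy()->apply(amount);
}

// Direct: the same behavior, visible at a glance and trivially inlined.
double total_with_tax(double amount) { return amount * 1.20; }
```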
I very much sympathize with the desire to use computer resources efficiently (in part from being a computer architecture enthusiast who places excessive importance on his own knowledge of computer hardware, and in part from a mindset generally opposed to waste). However, just as the first rule of Optimization Club (“do not optimize”) is easily misused, lamenting the inefficient use of computer resources can be misguided when it fails to recognize that other resources have costs too, and that time-to-solution includes programming time.
The lesson that there is never time later to make a program maintainable also applies to performance: refactoring for performance can be just as expensive as refactoring for maintainability, and piecemeal performance optimization runs into limits on effectiveness similar to those of piecemeal maintainability improvements.
Caveat: I am not a programmer, though I have read a little of the theory and browsed a tiny amount of source code.
I’m not sure that things have gotten less mature, but things are not any more mature than they were 30 years ago.
Even today, ten-thousand-line methods are being written, O(n^2) data structures and algorithms are being used where better ones exist, and major language features (in any language) go unused.
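As a hypothetical illustration of the kind of accidental O(n^2) code alluded to here, the sketch below de-duplicates input by linearly scanning a vector for every element, next to a near-linear version that uses a hash set; the function names are invented for the example.

```cpp
// Hypothetical example of accidentally quadratic code and its fix.
#include <algorithm>
#include <string>
#include <unordered_set>
#include <vector>

// O(n^2): every insertion is preceded by a linear scan of everything seen so far.
std::vector<std::string> dedup_quadratic(const std::vector<std::string>& in) {
    std::vector<std::string> out;
    for (const auto& s : in)
        if (std::find(out.begin(), out.end(), s) == out.end())
            out.push_back(s);
    return out;
}

// Roughly O(n): membership is checked against a hash set instead.
std::vector<std::string> dedup_linear(const std::vector<std::string>& in) {
    std::vector<std::string> out;
    std::unordered_set<std::string> seen;
    for (const auto& s : in)
        if (seen.insert(s).second)  // insert() reports whether s was new
            out.push_back(s);
    return out;
}
```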