The word myth brings up images of classical mythology. From there it can be generalized a couple ways. One is any story that is not true. Another is a story, whether true or not, that embodies a system of belief.
Sociologists use myth in the latter sense when they speak of “the myth of progress.” They are not suggesting that progress does not happen. Instead they are giving a name to the faith that things are always improving.
The myth of progress runs deep in software development. For example, when a software package says it requires XYZ version 4.6 or better, there’s the implicit assumption that later is always better, and better by all criteria. It’s hardly ever the case that a new version has more features, is easier to use, has fewer bugs, is more secure, requires less memory, runs faster, and costs less money. At best, the new software is better by the criteria that most users find most important.
The myth of progress relieves some of the burden of thinking. It’s easier to assume newer is better than to ask whether the new thing is better for me, now, for my work, etc.
Companies profit from the myth of progress by selling new versions. For this reason, the myth of progress may not be as strong in the open source ecosystem. For example, it seems Linux developers have more respect for old software than Windows developers do.
Consider the text-processing utility sed, written by Lee McMahon around 1973. Linux users today think nothing of using a utility over 30 years old. Windows developers, on the other hand, look down in disdain at anything three years old.
You could argue there’s no reason to use sed. AWK is more powerful than sed, and Perl is more powerful than AWK, so sed (and AWK) should die away. So why does sed still exist? Because AWK and Perl are not uniformly better. They have more features and are useful for a wider variety of problems, but sed does what it does very well. sed can do some common tasks in just a few keystrokes that would require more code in AWK and even more code in Perl.
In Windows development, there’s a sense that the old must die for the new to live. VB.NET has to replace VB 6, for example. But there’s no corresponding sense that sed has to go away for AWK or Perl to prosper. Instead, AWK and Perl carefully—I’d almost say lovingly—preserve sed’s conventions.
Maybe Windows developers have a stronger sense of progress because Windows really has seen more progress. Unix began on the high-end systems of its day and has worked its way down. Windows began on low-end hardware and has worked its way up. Unix was written for scientists and engineers and later became more of a consumer operating system. Windows was written for consumers first and more recently has tried to appeal to scientists and engineers. Given this history, it’s not surprising that the Unix community might want to preserve more of its early development. But it doesn’t follow from Windows NT being a huge improvement over Windows 3.1 that Visual Studio 2010 is necessarily an improvement over Visual Studio 2008.
Related post: Negative space in operating systems
38 thoughts on “Software development and the myth of progress”
The sense of “the old must die so that the new can live” is alive and well in FOSS as well. Witness multiple re-writes of the Linux audio stack, the creation of multiple and competing web development frameworks and database technologies, etc…
That reminds me of the recent upgrade to my Skype program. The newest version has a pop-up that directs you to their website. There is no way to turn this feature off in the new version. The older version didn’t include this annoying pop-up… I have recently removed Skype from my startup programs.
I couldn’t help but laugh, in light of your last post, that this one starts with a definition. But you went on to use it honestly, so we’ll let it pass.
Jason: Good catch. I thought about that as I was writing the post. But I think I’m being objective here describing how other people used words rather than making up a definition.
Hmm… agree with the data points, not so convinced by the causation, at least on the long-lasting side. If you look around your Linux system at the tools, like sed and grep, that have lasted, I think it supports the theses of “do one thing well” and “allow commands to be chained together” more than any myth-defying. Windows has never had a strong history of piping outputs to inputs and has tended towards monolithic suites rather than single-purpose tools. It’s easier to get the latter correct. As one moves up the scale, more alternatives to the obviously correct thing exist and more errors in design are made, requiring or inviting rewrites and multiple implementations along the lines that Andrew pointed out in his comment (none of which, I’ll argue, particularly support the myth of progress either, since many of them are “release early, release often, sometimes throw away” or legitimate alternatives).
tldr version: Cook is old enough now that he hates learning new things.
The fascinating thing about software is that it’s possible to have far more features than any physical piece of hardware. With ordinary tools such as cars, airplanes, torque wrenches, etc. there are rational limits to the functions you can pile on them. This naturally causes attention to return to the meta-aspects of the tools: performance, robustness, usability, comfort, etc. Car makers don’t just try to add more and more features to a car, they hone it into a well fitting niche in the market, they craft it so that it’s well balanced, they tune the performance so that it’s competitive within its class, they spend a lot of effort on fit and finish so that people will enjoy driving it. The same is true for every other tool. As much effort goes into refining a given tool from generation to generation as goes into bolting on new capabilities.
However, software is a different realm: in software there are no similar practical limits to the features you can add to a given program. You don’t have to create just a jumbo commuter jet or a sport-tuned luxury town car; you can create a Frankensteinian monster that does anything and everything. And if you con your customers into thinking that more features equals a better product, then you can make more money that way too. There’s a tendency in the software world to look at software as a laundry list of features and to look at software evolution as merely changes in that list. In reality the fit and finish, the meta-features like performance and robustness, are just as important. But because so few software companies understand how to deal with those aspects, let alone deliver on them competitively, the state of the industry is a mess. Incidentally, this is why Apple is so dominant in its markets: it not only prioritizes overall quality above mere feature-set-itis, it is also extremely capable of delivering on quality to a level higher than much of its competition.
Incidentally, Visual Studio 2010 was probably a significant factor in my burnout that led to me quitting the software consulting industry completely.
It strikes me as a little strange that you have focused on the IDE. There isn’t that much difference between 2008 / 2010.
I think it is more interesting to critique the difference between versions 3.5 and 4.0 of the .NET framework. I expect that 3.5 is an exact subset of 4.0, therefore assuming no actual bugs we must be able to say 4.0 is at least as good as 3.5.
Following from that, I can point to many new language features and classes that are extremely useful. System.Threading.Tasks.Parallel and System.Collections.Concurrent alone make it worth the trouble of upgrading.
Finally, back to the IDE, I haven’t really thought about it, but I have found that 2010 is faster with XAML editing and the performance wizard / profiler in 2010 Professional is very good (obviates the need for ANTS or similar).
“when a software package says it requires XYZ version 4.6 or better, there’s the implicit assumption that later is always better”
There is a thing called backward compatibility. Newer versions should have all the features of older ones, plus something new. Violating this principle will cause displeasure.
But generally yes, it is a feature of all of our modern civilization. We are not satisfied with what we have, even when it is good enough. We always crave something new and extra. As the Fathers of the Church say, there is some damage in human nature which disturbs and tangles our ways of life.
“Windows developers, on the other hand, look down in disdain at anything three years old.”
LOL, try Mac development sometime. See how well your three-year-old code works there.
rj: I mentioned Visual Studio because that was one of my biggest disappointments. If you follow the link, you’ll see why. And if you look through the comments, you’ll see that many people find VS 2010 to be slower. Some say it’s not that bad, but nobody says VS 2010 is faster.
VS 2010 is better for many developers, e.g. those developing WPF applications. VS 2010 is not an improvement for me, at least not when I’m writing C++. Maybe when I’m writing C#.
Vista might have been a better example. In my experience, Vista was far worse than XP. As with VS 2010 versus 2008, I didn’t find the new features worth the slower performance. And this goes back to my point: “better” is in the eye of the beholder. Some people found Vista better than XP. I didn’t.
By the way, I do find Windows 7 to be better than XP overall, though not in every detail.
I agree… C++, Java, and C# are all a pain in the AZZ! If you just write carefully and intelligently in C, everything is much more readable and life is good!
A good article – but I can absolutely verify that VS2010 is a big improvement over VS2005/8, though, as you say, based on the criteria I find most important.
DrewCrewOf2: I actually prefer C++ over C, at least the subset of C++ that I’ve carved out for my own use. But I can see the advantages of plain old C in some circumstances.
I used to think “C++ is a superset of C, therefore C++ is better.” I now see that was naive. I do think C++ is better for me, given my experience, my personality, the kind of work I do, etc. But I’m aware of the costs of the extra features and understand that for some people the costs outweigh the benefits.
Perhaps vendors should say version 4.6 or bigger. :)
I’ve always seen it “version 4.6+” or “version 4.6 or higher” but never “better”.
Though it reminds me of a bumper sticker I saw: “The box said requires Windows XP or better, so I installed Linux.”
I like the Linux creed of “avoid Swiss Army knives: do one thing and do it well.” Sadly that doesn’t translate well into the corporate model. People there want their software to do EVERYTHING, and will lynch you if you dare suggest they bring up a command prompt.
If feature A is done 100x better in program Y, but program X has feature A AND a GUI, program X wins in the corporate world. And if program X does features A, B, and C (albeit worse than programs Y, Z, and W would on their own), then it’s considered “better.”
I sometimes hate how software tends to be a list of features with no thought given to how well it does them. I see this in Windows all the time. I see “oh hey, XP can do symlinks like Linux,” then I find out, “oh… junctions aren’t exactly like a symlink… it’s like a symlink’s retarded cousin.” I saw that .NET could do piping between programs (that is, it has something like popen, and it can capture and read from the stdout of another program and write to its stdin). “Sweet, I can make these two programs talk to each other,” I think.
Then I find that yes, it can do that, but it can’t poll on file access, and stdout is considered file access. So you don’t know if the next read is going to make your program wait on the other program, which is waiting on you. Deadlock ensues.
And no one is as guilty of this as IE. You think, “oh hey, they support this feature that every other browser has had for a decade now, cool… wait… what? That’s not supposed to look like that.”
I think the difference in “progress” on Unix vs Windows is simple:
Unix started off (necessarily) as a simple system with simple solutions – the emphasis was on functionality with no extraneous stuff like user interface to get in the way – the resources for junk simply weren’t available. From an early point in its development, apps were developed by the people who actually used them, often cooperatively across the community, and tweaked to do exactly what they needed to do, and nothing more. They quickly stabilised into “ideal” applications.
Windows (as the name suggests) started off being almost all about user interface with very little functionality. The apps were developed by software houses in ivory towers and shipped to the unwitting users, who put up with 99% of the computer’s resources being wasted on an eyesore of a cobbled-together GUI (it was contemporary with the Mac, which had a beautiful, well-designed GUI, so there’s no excuse). To sell more apps, the developers whacked in more and more extraneous junk in a competitive arms race. The apps all went one of two ways: the early apps (and the early versions of Windows) were terrible, so upgrades often were an improvement (Win95 finally caught Windows up to the Mac OS of ten years previously). And conversely, many apps became bloated and slow, and thus less good.
So on Unix, to an extent, there are no new versions – everything already works as most of the users want it. On Windows, new versions are often upgrades, because the old versions were so crap and missed the mark of what users wanted. With Word 2007 on a 3GHz system I finally have a Windows application that is almost as fast and usable as the word processor I used in 1987 (Impression on 8MHz RISC OS – it had WYSIWYG antialiased vector fonts, styles, master pages, a spelling checker and all the core features most people need for word processing, but none of the junk).
Refinement over embellishment? As a lowly web developer, I have a hard time telling clients with biggish sites that the site would be improved by removing underused features and sections rather than adding new ones.
I have also found it fascinating that in the early 90s, Excel would run on a 486. Now it requires a couple of gigs of memory. Sure, there are plenty more features, but for the average punter who wants a spreadsheet and just a spreadsheet, all they are getting is bloat. I much prefer Gnumeric to OpenOffice because of Gnumeric’s speed and not being prompted to recover documents from two months ago.
Is the new OS X abstracting of the file system refinement or limb hacking? If you are a programmer or developer, probably limb hacking; for the other 95+% of users, probably refinement.
“Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.” – Antoine de Saint-Exupery
I did follow your link. It sounds like you had a painful install.
Personally, installing VS2010 took me about an hour or so. When I launch visual studio for the first time after a cold boot it takes about 5-10 seconds to load. Subsequent to that it takes 1-2 seconds to load.
In other words, I don’t even find it slow, in fact I would say it is fairly fast and no slower than 2008.
rj: It’s interesting that there’s so much variability in experiences with VS 2010. You’d expect a lot of variability in subjective responses, but apparently there’s a big variance even among people who carefully timed things, more than I would expect from hardware variation.
It is possibly something to do with the options installed. I only installed exactly what I use, C# and F#. I consciously unticked VB.NET, TFS, C++, etc. when installing. This may help both the install and load times.
Also, the only plugin I run is AnkhSVN; I studiously avoid Resharper / PowerTools etc.
Otherwise I can’t explain it. I certainly have good hardware at home, but my work machine is 4yrs old and runs VS2010 very well.
I’ve tried out the beta version of VS2010 and it was very slow. WPF projects were exceptionally sluggish. Apparently the release version has better performance.
The main reason why sed is still being used is because the Linux world has not forced anyone to stop using it. This issue isn’t about respecting old software, or “the old must die so the new can live.” It’s “the old must die so Microsoft can live.” You briefly touch on this in the 5th paragraph, but to me it’s the whole issue. I’ve been using and developing Microsoft products since 1987, and I hate being strong-armed away from an adequate app or tool just so Microsoft can dominate.
Those of us who have to manage Windows systems do a lot of scripting: VBScript, JScript, Powershell, etc. … and sed. There is a suite of UNIX tools ported to Windows and I use them all the time.
G. Pullam’s classic (2004! ancient history!): “Yesterday’s Technology Tomorrow.”
Interesting comment thread.
At least most older Windows software still runs on Windows 7. Apple dropped support for “classic” software a while ago, and they’re taking out the PowerPC emulator in 10.7.
Money-making corporations spend money to keep the gloss on the Myth of Progress; otherwise their cash flow would dramatically plummet (aka built-in obsolescence). But since software doesn’t rust away, they have to let your brain rust instead. FOSS can focus on increasing value and reducing bug counts. When those hit the “acceptable” level, the program “solidifies.” That doesn’t stop new ones with a greater level of functionality from popping up. The sed–awk–Perl continuum is a great example.
Great post John!! Just before reading your blog, I was working on my software, where I was using the latest packages of another piece of software for image processing. As it turned out, the latest package was not at all compatible, and I had to revert to older versions to get my software working :).
Just because M$ has a lot of room to improve.
Thanks for this blog. Seems to me I used edlin under a DOS shell not two years ago, but I just checked and I don’t see it with Win7. But that got me looking at the latest DOS commands and now I know about ROBOCOPY. Maybe it’s less newness per se and more about the kind of ‘space’ in the user’s head. Those of us who tend to be very abstract are too busy thinking about these abstractions to care about the interface we’re using, or to make enough money to buy this year’s VS. Or maybe it’s where a person gets the adrenaline rush from…from the new, or from ‘pulling the lever’ one more time on our compiler to see if our ‘perfect plan’ worked.
Talking about AWK and Perl alone makes you look “old” to the ones who believe in that “progress.” AWK is roughly from the same time as sed, and Perl a mere 10 years younger, AFAIK. I’m 49 now, use sed and GAWK (but almost no Perl) on Windows, and can do in 20 minutes what “Windows Power Users” would start up a project for.
The problem under Windows is that there are only a few tools that are that good. One that comes to mind is “zipping”: the ZIP format + tools seem to be immune to that “progress” thing.
If all you want to do is Unix-style programming as it was practiced in 1980 with C, SED, AWK, etc., download Cygwin and you can work on Windows pretty much the same way as I did back in the day on a PDP-11. (You might need to find a non-ANSI version of a C compiler if you like Kernighan and Ritchie’s syntax — their programs no longer compile.)
Presumably everyone here means “C + whatever libs I need”. By itself, I can’t even do threading in C or C++, much less parse XML with Unicode in it. Sure, you might not think I should be doing that, but what if I want to extract all that nice information NLM provides in XML format, like MEDLINE or EntrezGene?
Also, I thought much of the Linux device driver development was piggybacked by wrapping Windows drivers. Maybe that’s no longer the case.
As a Perl programmer, I agree with almost everything, but cannot allow this:
“…would require more code in AWK and even more code in Perl.”
Perl is often shorter than awk. In some cases, Perl is slightly shorter than sed.
I’m not saying that Perl necessarily takes more code than awk. But if a task is in the sweet spot of what awk was designed to do, the awk code may be shorter or simpler than the corresponding Perl. I read an interview with Larry Wall where he says he does some tasks in awk.
VB6 is a poor example: the reason it should be deprecated is that it was broken from the start. It’s unstable, it doesn’t work well with version control, and it’s a nightmare to unit test. So yeah, it needs to burn.
As for recent .Net updates: the last couple of framework updates have made life a lot easier…
I think it is just the way the business is done. Windows and stuff for Windows need new versions to make money: solve old problems and create new ones. Unix, on the other hand, works a different way. Who is wrong or right? I don’t know, but I make money with Windows.
If you keep reinventing the wheel, you’ll never invent the automobile.