Can better tools make you less productive? Here’s a quote from Frequently Forgotten Fundamental Facts about Software Engineering by Robert Glass:
Most software tool and technique improvements account for about a 5- to 30-percent increase in productivity and quality. … Learning a new tool or technique actually lowers programmer productivity and product quality initially. You achieve the eventual benefit only after overcoming this learning curve.
If you’re always learning new tools, you may be less productive than if you stuck with your old tools a little longer, even if the new tools really are better. And especially if you’re a part-time developer, you may never reach the point where a new tool pays for itself before you throw it away and pick up a new one. Kathleen Dollard wrote an editorial to this effect in 2004 entitled Save The Hobbyist Programmer.
Miners know they have a significant problem when the canary they keep with them stops singing. Hobbyist/part-time programmers are our industry’s version of the canary, and they have stopped singing. People who program four to eight hours a week are being cut out of the picture because they can’t increase their skills as fast as technology changes. That’s a danger signal for the rest of us.
So what do you do? Learn quickly or change slowly. The first option is to commit to learning a new tool quickly, invest heavily in up-front training, and use the tool as much as you can before the next one comes along. This is the favored option for ambitious programmers who want to maximize their marketability by always using the latest tools.
The second option is to develop a leapfrog strategy, letting some new things pass you by. The less time you spend per week programming, the less often you should change tools. Change occasionally, yes, but wait for big improvements.
5 thoughts on “Better tools, less productivity?”
I’m a full-time professional programmer, and it’s a fantastic amount of work to try to keep up with technological change. I think your leapfrog strategy is really the way for everyone. I need to see a decent user base and positive experience before learning something new.
Most users of a software technology (including me) are at the beginner end of the spectrum. But most software seems to be designed with power users in mind. I think this is a problem.
I work in the Microsoft world. A few years ago VB6 and an Access database was a popular platform for the hobbyist programmer. It let folks knock something up which probably did an OK job (but boy was it ugly). I now work with C# and the .NET framework, and I think it’s really good. I’m not sure I’d recommend it to a hobbyist though; there’s just so much of it. In the MS world there’s no real successor to VB6 and Access.
If you want to knock up a website, your only real choice as a hobbyist is PHP. You can download a virtual machine with a fully configured stack (OS, DB, web server, and PHP) and get going. There’s lots of stuff written in it, like WordPress and Wikipedia, and you can probably find a content management platform you could start with and then just hack a bit as necessary. But again, it’s ugly. You could try Rails, and you may well get much better code, but as a hobbyist I think the conceptual leap to using Ruby may just stop you getting started.
It’s almost as if simplicity for the beginner (and hence widespread popularity) leads to an inherent ugliness as the software grows larger.
Leapfrog would be the best way to go, because you definitely don’t need to master every new tool that enters the market. And doesn’t this seem to be mostly the case with tools created by Microsoft? Tools for languages like C and Python don’t seem to change so fast or so much.
Yes, yes, yes! I am very fortunate to work at a university where keeping up with, supporting, and evangelizing technology is my job. I program probably 15–20 hours a week on all sorts of projects. However, when I look at new technology, whether a new version, new language, whatever, my initial response is often “You have GOT to be &*(ing kidding me?!”
I am pretty smart (four degrees) and have buckets of experience, but when I look at something that would take a solid week of my time just to figure out the installation, it often goes onto the spaceship pile, that is, the things that will get done after finishing the spaceship I am building in my basement. And first I will have to dig a basement.
Once a week or so I listen to earnest vendors wanting to show me the lovely graphics or fantastic data mining capabilities of their software, and I try to gently explain to them that first I need to get the #$%ing thing to work, to read in my data from SQL or SAS, to connect to our high-performance computing cluster, and I do NOT have a week or two or three to do nothing but mess with it.
I am generally the person who likes to be using the latest version of whatever software (especially at the beginning of a project). But really, I think there are only three times when one should consider new technologies: first, when the tricks and techniques you have learned with the old can no longer keep up with a novice user of the new technology; second, when you’re starting a new long-term project and you have a little time to weigh the newer approaches against each other and against the old approach; and finally, when the old technology is costing you in diminishing resources.
As someone who has programmed in excess of 8 hours/day for the past 40-odd years, I like to keep up with new stuff, but I never got around to .NET until recently. I installed VS 2010 Express as recommended, but to be honest I find the interface ugly and extremely kludgey and slow compared to the VS 2005 I was using previously.
If anyone has a tip for making the MMI look and respond like VS 2005, I’d be grateful. I’m wasting a lot of time just trying to select lines; the cursor positioning is very finicky.