One of the differences between amateur and professional software development is whether you’re writing software for yourself or for someone else. It’s like the difference between keeping a journal and being a journalist.
People who have only written software for their own use have no idea how much work goes into writing software for others. You have to imagine a thousand things a user might do that you would never do. You have to decide which of these things you will accommodate, and which you will disallow. And when you decide to disallow an action, you have to decide how to do so while causing minimal irritation to the user.
GUI applications are particularly hard to write, not because it’s difficult to draw buttons and boxes on a screen, but because it’s difficult to think of all the ways a user could arrive at a particular state.
In between writing software for yourself and writing software for others is writing software for people very much like yourself. Open source software started out this way, alpha programmers writing software for alpha programmers. Since then the OSS community has gotten much better at writing software for general users, though it still has room to improve.
Related post: Software exoskeletons
9 thoughts on “Writing software for someone else”
I read something once that described software as a 2×2 matrix. One axis is “for me” versus “for others”. The other axis is “on my machine” vs “on other machines”.
Moving from the easy choice to the hard choice on each axis is something like 10x the labor, maybe more.
Thus, you can move from “It works for me on my machine” to “It works for me on any machine” with 10x the labor.
…and, of course, you can move to the far corner “it works for anyone on any machine” with 100x the labor.
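The arithmetic in the comment above can be sketched as a tiny model. The 10x-per-axis multiplier is the commenter’s rough estimate, not a measured figure, and the function name here is made up for illustration:

```python
# Rough effort model from the 2x2 matrix above: each "hard" axis choice
# (for others rather than for me, on other machines rather than mine)
# multiplies the labor by roughly 10x, per the commenter's estimate.
AXIS_MULTIPLIER = 10

def relative_effort(for_others: bool, other_machines: bool) -> int:
    """Effort relative to 'works for me on my machine' = 1."""
    effort = 1
    if for_others:
        effort *= AXIS_MULTIPLIER
    if other_machines:
        effort *= AXIS_MULTIPLIER
    return effort

print(relative_effort(False, False))  # for me, on my machine: 1
print(relative_effort(False, True))   # for me, on any machine: 10
print(relative_effort(True, True))    # for anyone, on any machine: 100
```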
Fred Brooks covered this in the Mythical Man Month in 1975.
Exactly. Writing GUI applications is painful all the way through, especially line-of-business (LOB) ones: not only because of how many invariants a form can have, but also because of the expectations the user has, and, most importantly, how many side tasks (different from the main purpose of the application itself) you have to handle to satisfy the customer.
I wouldn’t say that writing GUI applications sucks, but it does require a different set of skills in addition to computer science, and it’s not the kind of work most CS majors imagine they’ll do for a living. They may imagine writing GUI apps, but they probably have no idea what that means in practice.
An artist is only really actualised as such when he releases his work, and is ready to face criticism. Yes, coding is an art.
I’ve found a large part of UI design to be defensive. The fewer degrees of freedom I give the user, the less likely the user is to do something “inventive” that blows up the application.
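One way to read “fewer degrees of freedom” in code: offer the user a closed set of choices, and validate anything free-form before it reaches the application logic. This is a generic sketch of that defensive stance, not code from the comment; the field names and limits here are hypothetical:

```python
# Defensive input handling: reduce the user's degrees of freedom.
# The names (ALLOWED_FORMATS, parse_export_request) and the row limit
# are hypothetical, chosen just to illustrate the idea.
ALLOWED_FORMATS = {"csv", "json", "pdf"}  # a closed set beats free text

def parse_export_request(fmt, max_rows):
    """Validate user input at the boundary, rejecting bad values with a
    clear message instead of letting them blow up deeper in the app."""
    fmt = fmt.strip().lower()
    if fmt not in ALLOWED_FORMATS:
        raise ValueError(
            f"Unsupported format {fmt!r}; choose one of {sorted(ALLOWED_FORMATS)}")
    try:
        rows = int(max_rows)
    except ValueError:
        raise ValueError(f"Row limit must be a whole number, got {max_rows!r}")
    if not 1 <= rows <= 100_000:
        raise ValueError("Row limit must be between 1 and 100,000")
    return fmt, rows

print(parse_export_request("CSV", "500"))  # → ('csv', 500)
```

The point is the shape, not the specifics: every check here turns an “inventive” input into a polite refusal rather than a crash later on.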
I’ve been writing professional software for others since 1983, when I was still a student. In 1986, I had to create software for doctors and medical emergency services in Geneva, Switzerland. I decided to create my own library for handling the entire user interface and – back then – even the database.
This library evolved through two major overhauls. It was reviewed quite favorably in BYTE magazine by Dick Pountain (“A Modula-2 Program Development Aid”, Oct. 1988). By then, I had created a virtual windows environment under DOS.
The choice of Modula-2 over C is probably the main reason why I managed to write not only that library, but also over 100 major applications for DuPont, HP, Logitech, the Swiss Army and many other clients, relying entirely on my own library.
In 1994, I moved on to an object-oriented library for Oberon-2 – then the latest creation by Prof. Wirth, the father of Pascal. I immediately produced major projects for HP, including the entire Telecom’95 scheduling software.
I wrote a private banking customer data management application which is extremely feature-rich and for which I have received many positive comments from the users, who find the interface intuitive, flexible and powerful. The main constraint for that application is very detailed data access control down to the field level, per user group, to prevent the kind of violation of bank secrecy laws that have marred larger Swiss banks. The bank using my application, Royal Bank of Canada, Geneva, has suffered none of those problems.
Note that I am a very lazy programmer and always try to do everything in a minimal amount of highly readable and structured lines of code, because I hate wasting time trying to decrypt unreadable code.
e.g. the entire private banking application, after 22 years of development, fits into 37’000 lines of code and less than 5 MB of executable code and I can make changes to any section of the code at any time without worrying about horrible side effects.
Another application that is still in use after more than 20 years – after migrating from DOS to Windows in 1997 – can be found at the DuPont ballistic laboratory in Geneva. Every single ballistic protection device in use by police and the military around the world was tested via my software, which controls all the instruments, maintains the database and produces client reports and statistics. It is a critical business application. 4600 lines of Oberon-2 code.
Oberon-2 code rivals or exceeds C++ in execution speed, leaves it completely in the dust on compilation speed and code size, and allows the creation of far, far more complex code thanks to an incredibly efficient garbage collector, which totally removes the problem of memory leaks. Even better, Oberon-2 code is modular and readable. The language can be learned in a day (the definition fits into 20 pages of text) and does everything C++ does, only better.
My library is called Amadeus-3 and it supports functions that still blow away people who think that they have used some very cool tools. It supports sophisticated user interfaces with minimal programming.
I felt it was time to remind everyone what an abysmal collective choice it was to adopt the pseudo-assembler syntax of C, which goes completely against the way the human brain handles pattern recognition. It’s like shooting yourself in the foot and it shows – lousy, buggy, hyper-expensive software, except for very small, contained applications.
Ironically, the iPad Twitter App crashed just as I tried to post this. Fortunately, I expect crappy software and had saved my text before. So what caused the problem? A buffer overrun, a memory leak or another one of 20 totally preventable common bugs due to the lousy design of the programming language used?
For all those who think that notation doesn’t matter:
TRY TO DO MATHEMATICS WITH ROMAN NUMERALS!!!
There’s always a better way.
One problem in academia is that there is an utter lack of appreciation on the part of peer reviewers for the effort required to take an implementation of a new research idea and turn it into something that can actually be used by other researchers and practitioners. This fact is one important reason why researchers generate one-off programs for testing an idea for a paper and then let it languish, depriving others of the benefit of the knowledge embodied in the implementation. Another, of course, is that the researcher doesn’t really care or have the skills to write software for use by others.
My colleague Dan Warner points out that open-source software is probably the most effective possible technology transfer vehicle for computational science research results–far superior to papers written about those ideas. But that’s only really true if the software is written with that technology transfer mission in mind.
Good points John.
I would add another point on the spectrum from “writing software for yourself” to “writing software for *someone* else” to “writing software for *everyone* else”. There’s another shift in mindset and practices when writing, for example, an app to be used by financial analysts at the bank you work for vs. writing Excel.