Superficial convenience

Here’s an interesting phrase:

superficial convenience at the expense of real freedom

This comes from the conclusion of the 1998 essay The Elements of Style: UNIX as Literature by Thomas Scoville. The author sums up his preference for UNIX culture by saying he prefers the “freedom and responsibility that UNIX allows” to the “superficial convenience” of Windows NT.

I’m not interested in arguing the relative merits of Unix and Windows here. I’m more interested in the broader ideas that spin off from the quote above. When is a convenience superficial? How well does convenience versus freedom explain technological controversies?

I could see substituting “short-term convenience” for “superficial convenience” and substituting “long-term efficiency” for “real freedom.” But that may lose something. Thomas Scoville’s terms may be more nuanced than my substitutions.


Third-system effect

The third-system effect describes a simple system rising like a phoenix out of the ashes of a system that collapsed under its own complexity.

A notorious ‘second-system effect’ often afflicts the successors of small experimental prototypes. The urge to add everything that was left out the first time around all too frequently leads to huge and overcomplicated design. Less well known, because less common, is the ‘third-system effect’: sometimes, after the second system has collapsed of its own weight, there is a chance to go back to simplicity and get it right.

From The Art of Unix Programming by Eric S. Raymond, which is available online.

Raymond says that Unix was such a third system. What are other examples of the third-system effect?


Why AT&T licensed UNIX to universities

Here are a couple of details of UNIX history I ran across this week.

Why AT&T first licensed UNIX to universities:

At this time [1974], AT&T held a government-sanctioned monopoly on the US telephone system. The terms of AT&T’s agreement with the US government prevented it from selling software, which meant that it could not sell UNIX as a product. Instead … AT&T licensed UNIX for use in universities for a nominal distribution fee.

And why AT&T later turned it into a commercial product:

… US antitrust legislation forced the breakup of AT&T (… the break-up became effective in 1982) with the consequence that, since it no longer held a monopoly on the telephone system, the company was permitted to market UNIX.

Source: The Linux Programming Interface


A little Awk

Greg Grothaus posted an article today entitled Why you should learn just a little Awk. The article recommends taking a few minutes to learn only the most basic parts of the Awk language. I find this interesting for several reasons.

First, it is impressive what you can accomplish with just a few keystrokes in Awk. The language was designed for file munging and it does this very well. Many people, myself included, think of Perl as a language for file munging. And so it is, but I remember reading a remark by Larry Wall, the creator of Perl, that he uses Awk for some tasks.
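To give a flavor of what a few keystrokes can buy, here’s a one-liner of my own, not one of Grothaus’ examples, that sums the third column of a whitespace-delimited file (data.txt is a stand-in for whatever file you have):

awk '{ sum += $3 } END { print sum }' data.txt

Awk reads the file a line at a time, splits each line into whitespace-separated fields, and runs the block in braces on every line; the END block runs once after the last line and prints the total.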

Second, Grothaus isn’t encouraging people to master the language. He’s saying to learn just a handful of features, at least to start. That goes against my grain. When I learn a language, I want to learn it thoroughly. On the other hand, I don’t have the time or energy lately to learn a new language on top of everything else I have going on. But I think Grothaus has a good point: if you take a few minutes to learn how to do a handful of very specific tasks, it could be worth it.

Finally, I found it interesting to read a blog post about a language I haven’t touched in well over a decade. I used Awk in grad school for a little while and was quite impressed with it. But someone suggested that Perl was similar but even better, and I dropped Awk for Perl. Looking back, I’d say Perl is more general than Awk, but not necessarily better. Awk is quite good at the kinds of tasks it was designed for.

I’ve been trying to consolidate the list of programming languages I use after reaching programming language fatigue. Adding yet another language to the list of languages I haven’t mastered but use occasionally would not be progress. But Grothaus’ article tempts me to look at Awk again, not with the intention of mastering it but rather to learn how to do just a small number of things it does remarkably well.


Where the Unix philosophy breaks down

Unix philosophy says a program should do only one thing and do it well. Solve problems by sewing together a sequence of small, specialized programs. Doug McIlroy summarized the Unix philosophy as follows.

This is the Unix philosophy: Write programs that do one thing and do it well. Write programs to work together. Write programs to handle text streams, because that is a universal interface.

This design philosophy is closely related to “orthogonality.” Programs should have independent features just as perpendicular (orthogonal) lines run in independent directions.

In practice, programs gain overlapping features over time.  A set of programs may start out orthogonal but lose their uniqueness as they evolve. I used to think that the departure from orthogonality was due to a loss of vision or a loss of discipline, but now I have a more charitable explanation.

The hard part isn’t writing little programs that do one thing well. The hard part is combining little programs to solve bigger problems. In McIlroy’s summary, the hard part is his second sentence: Write programs to work together.

Piping the output of a simple shell command to another shell command is easy. But as tasks become more complex, more and more work goes into preparing the output of one program to be the input of the next program. Users want to be able to do more in each program to avoid having to switch to another program to get their work done.
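For example, counting the files in a directory takes a single pipe and no glue at all (my example, not McIlroy’s):

ls | wc -l

This works because the two programs agree on the simplest possible interface: ls writes one name per line, and wc -l counts lines.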

An example of the opposite of the Unix philosophy would be the Microsoft Office suite. There’s a great deal of redundancy between the programs. At a high level, Word is for word processing, Excel is the spreadsheet, Access is the database, etc. But Excel has database features, Word has spreadsheet features, etc. You could argue that this is a terrible mess and that the suite of programs should be more orthogonal. But someone who spends most of her day in Excel doesn’t want to switch over to Access to do a query or open Word to format text. Office users are grateful for the redundancy.

Software applications do things they’re not good at for the same reason companies do things they’re not good at: to avoid transaction costs. Companies often pay employees more than they would have to pay contractors for the same work. Why? Because the total cost includes more than the money paid to contractors. It also includes the cost of evaluating vendors, writing contracts, etc. Having employees reduces transaction costs.

When you have to switch software applications, that’s a transaction cost. It may be less effort to stay inside one application and use it for something it does poorly than to switch to another tool. It may be less effort to learn how to do something awkwardly in a familiar application than to learn how to do it well in an unfamiliar application.

Companies expand or contract until they reach an equilibrium between bureaucracy costs and transaction costs. Technology can cause the equilibrium point to change over time. Decades ago it was efficient for organizations to become very large. Now transaction costs have lowered and organizations outsource more work.

Software applications may follow the pattern of corporations. The desire to get more work done in a single application leads to bloated applications, just as the desire to avoid transaction costs leads to bloated bureaucracies. But bloated bureaucracies face competition from lean start-ups and eventually shed some of their bloat or die. Bloated software may face similar competition from leaner applications. There are some signs that consumers are starting to appreciate software and devices that do less and do it well.


The Unix Programming Environment

Joel Spolsky recommends several books to self-taught programmers who apply to his company and need to fill in some gaps in their training.

The one that has me scratching my head is The Unix Programming Environment, first published in 1984. After listening to Joel’s podcast, I thumbed through my old copy of the book and thought “Man, I could never work like this.” Of course I could work like that, because I did, back around 1990. But the world has really changed since then.

I appreciate history and old books. I see the value in learning things you might not directly apply. But imagine telling twentysomething applicants to go read an operating system book that was written before they were born. Most would probably think you’re insane.

Update (16 November 2010): On second thought, I could see recommending that someone read The Unix Programming Environment these days even though technology has changed so much, but I’d still expect resistance.

Comparing the Unix and PowerShell pipelines

This is a blog post I’ve intended to write for some time. I meant to wait until I had a great example, but I’ve decided to go ahead and publish it and let you come up with your own examples. Please share your examples in the comments.

One of the great strengths of Unix is the shell pipeline. Unix has thousands of little utilities that can be strung together via a pipeline. The output of one program can be the input to another. But in practice, things don’t go quite so smoothly. Suppose the conceptual pattern is

A | B | C

meaning the output of A goes to B, and the output of B goes to C. This is actually implemented as

A | <grubby text munging> | B | <grubby text munging> | C

because B doesn’t really take the output of A. There’s some manipulation going on to prepare the output of A as the input of B: strip these characters from these columns, replace this pattern with this other pattern, etc. The key point is that Unix commands spit out text. Maybe at a high level you care about programs A, B, and C, but in between are calls to utilities like grep, sed, or awk to bridge the gaps between output formats and input formats.
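Here’s a sketch of how that plays out on a made-up task of my own: kill every process named foo. Conceptually the pipeline is ps | kill, but in practice it looks more like

ps aux | grep '[f]oo' | awk '{ print $2 }' | xargs kill

The grep and awk in the middle are the grubby text munging: pick out the rows for foo, then extract the second column, the process ID, so that kill receives the arguments it expects. (Writing the pattern as '[f]oo' is a common trick to keep grep from matching its own entry in the process listing.)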

The PowerShell pipeline is different because PowerShell commands spit out objects. For example, if the output of a PowerShell command is a date, then the command returns a .NET object representing a date, not a text string. The command may display a string on the command line, but that string is just a human-readable representation; the string representation of an object is not the object itself. If the output is piped to another command, the latter command receives a .NET date object, not a string. This is the big idea behind PowerShell: commands pass around objects, not strings, and the grubby, error-prone text munging between commands goes away.
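Here’s a small sketch of the difference using the built-in Get-Date cmdlet:

Get-Date | Get-Member
(Get-Date).AddDays(7)

The first line pipes the output of Get-Date into Get-Member, which reports that the thing flowing through the pipe is a System.DateTime object and lists its properties and methods. The second line calls one of those methods directly, adding a week to the current date, with no string parsing anywhere.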

Not all problems go away just because commands pass around objects. For example, maybe one command outputs a COM object and another takes in a .NET object. This is where more PowerShell magic comes in. PowerShell does a lot of work behind the scenes to implicitly convert output types to input types when possible. This sort of magic makes me nervous when I’m programming. I like to know exactly what’s going on, especially when debugging. But when using a shell, magic can be awfully convenient.

Negative space in operating systems

Unix advocates often say Unix is great because it has all these powerful tools. And yet practically every Unix tool has been ported to Windows. So why not just run Unix tools on Windows so that you have access to both tool sets? Sounds reasonable, but hardly anyone does that. People either use Unix tools on Unix or Windows tools on Windows.

Part of the reason is compatibility. Not binary compatibility, but cultural compatibility. There’s a mental tax for shifting modes of thinking as you switch tools.

But I think the deeper reason few people use Unix tools on Windows is a sort of negative space. Artists use the term negative space to discuss the importance of what is not in a work of art, such as the white space around a figure or the silence framing a melody.

Similarly, part of what makes an operating system culture is what is not there. You don’t have to worry about what’s not there. And not worrying about something frees up brain capacity to think about something else. Having too many options can be paralyzing. I think that even though people say they like Unix for what is there, they actually value what is not there.