François Pinard, comparing the R programming language to smoking:

Using R is a bit akin to smoking. The beginning is difficult, one may get headaches and even gag the first few times. But in the long run, it becomes pleasurable and even addictive. Yet, deep down, for those willing to be honest, there is something not fully healthy in it.

I’ve never gotten to the point that I would call using R pleasurable.

There’s a motto in agile software development that says “You aren’t going to need it,” YAGNI. The idea is that when you’re tempted to write some code based on speculation of future use, don’t do it. (I go into this a little deeper here.)

Although “you aren’t going to need it” is a good principle for writing code, the analogous principle “you aren’t going to need to understand it” doesn’t seem to hold. When I try to avoid understanding things in detail, I end up needing to understand the details anyway and wishing I’d dug in sooner. This has been my experience in math research, software development, and project management.

Obviously you can’t understand everything about everything, and some things you clearly need to know well. In the fuzzy area in between, especially when I’ve said to myself “I hope I don’t have to dig into this,” I’ve often regretted postponing the dig.

For some ridiculous reason, to which, however, I’ve no desire to be disloyal,
Some person in authority, I don’t know who, very likely the Astronomer Royal,
Has decided that, although for such a beastly month as February, twenty-eight days as a general rule are plenty,
One year in every four, its days shall be reckoned as nine and twenty.

Contrary to semi-educated belief, NP-complete problems are not necessarily intractable. Often such problems are intractable, but not always. If a problem is NP-complete, this means that there is no polynomial-time algorithm for solving the problem

1. for the worst case,
2. as the problem size grows,
3. finding exact solutions,
4. with certainty,
5. as far as anyone knows.

One way out of this (#5) is to show how to solve NP-complete problems efficiently. That may not be possible, and it hasn’t been possible so far, so we’ll not worry about this one. But that still leaves four options:

1. special cases
2. small problem sizes
3. approximate solutions
4. probabilistic solutions.

For example, the Traveling Salesman problem is NP-complete. But it’s easy to find solutions for a small number of cities by brute force. And even for a moderate number of cities, people routinely find solutions; there’s even an iPhone app for finding solutions. For larger problems, there are algorithms that yield near-optimal solutions with high probability.
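To make the brute-force option concrete, here is a minimal Python sketch that finds an optimal tour by checking every permutation of cities. The four-city distance matrix is made up for illustration; only the technique comes from the text.

```python
from itertools import permutations

def tsp_brute_force(dist):
    """Return (cost, tour) of a shortest round trip visiting every city once."""
    n = len(dist)
    best_cost, best_tour = float("inf"), None
    # Fix city 0 as the starting point so rotations of the same tour
    # are not counted repeatedly.
    for perm in permutations(range(1, n)):
        tour = (0,) + perm
        cost = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
        if cost < best_cost:
            best_cost, best_tour = cost, tour
    return best_cost, best_tour

# Four cities with made-up symmetric distances.
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 8],
    [10, 4, 8, 0],
]
cost, tour = tsp_brute_force(dist)
```

Checking (n−1)! tours is fine for a handful of cities and hopeless for hundreds, which is exactly the “small problem sizes” loophole at work.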

Similar remarks hold for other kinds of problems deemed intractable. For example, some say you can’t find the roots of a 5th degree polynomial. Isn’t that a theorem? No. There’s a theorem that says you cannot find

1. exact solutions,
2. in general,
3. using a finite sequence of certain mathematical operations.

So there are three loopholes here:

1. approximate solutions,
2. special cases, and
3. more general operations.

If you only want to find solutions to 100 decimal places, for example, that’s doable.
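As an illustration of the first loophole (my example, not from the post): the quintic x^5 - x - 1 famously has no solution in radicals, yet Newton’s method with Python’s decimal module pins down its real root to 50 decimal places with no trouble.

```python
from decimal import Decimal, getcontext

getcontext().prec = 60  # a few guard digits beyond the 50 places we want

def quintic_root(x):
    """Newton's method for x^5 - x - 1 = 0, iterated to full precision."""
    for _ in range(100):
        step = (x**5 - x - 1) / (5 * x**4 - 1)
        x_new = x - step
        if x_new == x:  # converged to the working precision
            break
        x = x_new
    return x

root = quintic_root(Decimal("1.2"))
```

Newton’s method converges quadratically here, so only a handful of iterations are needed even at 60-digit precision.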

Most integrals cannot be computed

1. by a typical calculus student,
2. exactly,
3. with certainty,
4. in terms of elementary functions.

But often integrals can be computed by some combination of

1. more sophisticated exact techniques,
2. numerical approximation,
3. randomized algorithms, and
4. a broader class of functions.

For example, some say that the integral of exp(-x^2) cannot be computed. Of course it can, though there is not a finite combination of elementary functions that gives the exact integral. You can compute the integral exactly in terms of the error function erf(x), or compute it numerically to any desired accuracy. You could even approximate the integral with a Monte Carlo method, though there’s no point in using that approach here. For many high-dimensional integration problems, some sort of Monte Carlo is the only practical approach.
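The three routes above can be sketched in Python for the integral of exp(-x^2) from 0 to 1. The quadrature rule, interval count, seed, and sample size are illustrative choices of mine, not from the post.

```python
import math
import random

# Exact, in terms of the error function:
# erf(t) = (2/sqrt(pi)) * integral of exp(-x^2) from 0 to t,
# so our integral is (sqrt(pi)/2) * erf(1).
exact = 0.5 * math.sqrt(math.pi) * math.erf(1.0)

# Numerical approximation: composite Simpson's rule, 100 subintervals.
f = lambda x: math.exp(-x * x)
n = 100
h = 1.0 / n
simpson = (h / 3) * sum(
    (1 if i in (0, n) else 4 if i % 2 else 2) * f(i * h)
    for i in range(n + 1)
)

# Monte Carlo: the integral equals the mean of f over [0, 1].
random.seed(42)
samples = 100_000
monte_carlo = sum(f(random.random()) for _ in range(samples)) / samples
```

Simpson’s rule agrees with the erf value to many digits, while the Monte Carlo estimate is only good to two or three; that gap is why Monte Carlo earns its keep only in high dimensions, where quadrature grids become infeasible.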

Maybe you need to solve a problem for which none of the loopholes given here apply. That’s often the case. I’m not saying that there’s always a way out, only that sometimes there is. Sometimes problems thought to be intractable are not.

What counts, I found, is not what you cover but what you uncover. Covering subjects in a class can be a boring exercise, and students feel it. Uncovering the laws of physics and making them see through the equations, on the other hand, demonstrates the process of discovery, with all its newness and excitement, and students love being part of it.

To stretch this far … modernity must necessarily be culturally thin. … It is a generic culture, this culture of the television age, of asphalt, advertising, uniformity, and waste. And those who feed on it, those who live by it, become generic people who also are thin, who stretch wide and belong to nowhere in particular.

“Audible confetti” is a striking metaphor for distracting noise. Here’s the context where I ran across the phrase this morning:

The public sphere, dominated as it is by the omnipresence of bureaucracy, systems of manufacturing, the machinery of capitalism, and the audible confetti spewing out of countless radios and televisions, makes it virtually impossible to think that in this world God has any meaningful place.

Julia is a new programming language for scientific computing. From the Julia site:

Julia is a high-level, high-performance dynamic programming language for technical computing, with syntax that is familiar to users of other technical computing environments. It provides a sophisticated compiler, distributed parallel execution, numerical accuracy, and an extensive mathematical function library. …

I just started playing around with it. I didn’t see functions for non-uniform random number generation so I wrote some as a way to get started.

[Update: there are non-uniform random number generators in Julia, but they have not been added to the documentation yet. See details in this comment.]

Here’s a random number generator for normal (Gaussian) random values:

## return a random sample from a normal (Gaussian) distribution,
## using the Box-Muller transform
function rand_normal(mean, stdev)
    if stdev <= 0.0
        error("standard deviation must be positive")
    end
    u1 = rand()
    u2 = rand()
    r = sqrt( -2.0*log(u1) )
    theta = 2.0*pi*u2
    mean + stdev*r*sin(theta)
end

From this you can see Julia is a low-ceremony language: Python-like syntax, you can call common mathematical functions without having to do anything special, etc. You can have explicit return statements, but the preferred style seems to be to let the last line of the function be the implicit return statement.

My most common mistake so far has been forgetting to close code blocks with end; Julia’s syntax is similar enough to Python that I suppose I think indentation should be sufficient.

I’ve written random number generators for the following probability distributions:

I’ve been trying out Eshell lately. It’s a shell implemented in Emacs Lisp. Here I’ll mention a few features I’ve found interesting.

The command M-x shell lets you run a shell inside Emacs. This is quite handy, but it runs a different shell on each platform. For example, it might pull up bash on Linux and cmd.exe on Windows. Eshell provides a consistent shell across operating systems.

Eshell is one way to have a bash-like shell on Windows, so it’s an alternative to Cygwin. I haven’t used Eshell or Cygwin extensively, but Cygwin feels like some alien thing awkwardly grafted on to my Windows machine. Eshell feels more like cmd.exe with bash features added.

Of course Eshell is integrated with Emacs. One example of that is being able to run Emacs commands as shell commands. For example, you can run dired from the shell to open a directory in the Emacs directory editor.

Eshell has pseudo-devices. These act like Unix devices, but they’re implemented in Emacs and therefore run on non-Unix platforms. For example, you could direct command output to /dev/clip to send it to the system clipboard or to /dev/kill to add it to the Emacs kill ring.

And now for a little silliness. Here’s an excerpt from Tron featuring Eshell.

It goes by very fast, but here’s the frame that shows Eshell:

The young lady who took my order at lunch today had a strong Texas accent. You might think this would be nothing unusual—I live in Texas—but it surprised me.

Don’t Texans have Texas accents? Strictly speaking, yes, people in Texas speak like people in Texas. However, most people in Texas do not have the stereotypical Texas accent.

By the way, I don’t own a cowboy hat, a pair of boots, or an oil well. Some Texans do, but most don’t.

My favorite numerical analysis book is Numerical Methods that Work. In the hardcover version of the book, the title was impressed in the cover and colored in silver letters. Before the last word of the title, there was another word impressed but not colored in. You had to look at the cover carefully to see that it actually says “Numerical methods that usually work.”

The book was first published in 1970, so its specific references to computing power are amusingly dated. But the book’s advice is timeless.

The chapter entitled “The care and treatment of singularities” gives several approaches for integrating functions that have a singularity. This post will elaborate on the simplest example from that chapter.

Suppose you want to compute the integral of 1/√sin(x) from 0 to 1.

The integrand is infinite at 0. There are a couple common hacks to get around this problem, and neither works well. One is simply to re-define the integrand so that it has some finite value at 0. That keeps the integration program from complaining about a division by zero, but it doesn’t help evaluate the integral accurately.

Another hack is to change the lower limit of integration from 0 to something a little larger. OK, but how much larger? And even if you replace 0 with something small, you still have the problem that your integrand is nearly singular near 0. Simple integration routines assume integrands are polynomial-like, and this integrand is not polynomial-like near zero.

The way to compute this integral is to subtract off the singularity. Since sin(x) is asymptotically equal to x as x goes to 0, √sin(x) is asymptotically √x. So if we subtract 1/√x, we’re left with a bounded integrand, and one that is considerably more polynomial-like than the one we started with. We then integrate by hand what we subtracted off. That is, we replace our original integral with the integral of 1/√sin(x) - 1/√x from 0 to 1 plus the integral of 1/√x from 0 to 1.

Compute the first numerically and the second analytically. Simpson’s rule with intervals of size 1/64 gives 0.03480535 for the first integral, which is correct to seven decimal places. The second integral is simply 2, so the total is 2.03480535.
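Here is a Python sketch of that computation. The numbers are the post’s; my composite Simpson implementation and the choice to define the integrand as 0 at the origin (its limiting value) are assumptions about the details.

```python
import math

def g(x):
    """The integrand after subtracting off the singularity."""
    if x == 0.0:
        return 0.0  # the limit as x -> 0, since sin(x) ~ x there
    return 1 / math.sqrt(math.sin(x)) - 1 / math.sqrt(x)

n = 64         # subintervals of size 1/64 on [0, 1]
h = 1.0 / n
first = (h / 3) * sum(
    (1 if i in (0, n) else 4 if i % 2 else 2) * g(i * h)
    for i in range(n + 1)
)
total = first + 2.0  # integral of 1/sqrt(x) over [0, 1], done by hand
```

The subtracted integrand is bounded and gentle, so a modest Simpson grid suffices.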

Now let’s look back at how our two hacks would have worked. Suppose we define our integrand f(x) to be 0 at x = 0. Then applying Simpson’s rule with step size 1/64 would give an integral of 3.8775, which is completely wrong. Bringing the step size down to 2^-20 makes things worse: the integral would then be 4.036. What if instead of defining the integrand to be 0 at 0, we define it to be some large value, say 10^6? In that case Simpson’s rule with step size 1/64 returns 5212.2 as the integral. Lowering the step size to 2^-20 improves the integral to 4.351, but still every figure is wrong since the integral is approximately 2.

What if we’d replaced 0 with 0.000001 and used Simpson’s rule on the original integral? That way we’d avoid the problem of having to define the integrand at 0. Then we’d have two sources of error. First the error from not integrating from 0 to 0.000001. That error is 0.002, which is perhaps larger than you’d expect: the change to the integral is 2000x larger than the change to the lower limit of integration. But this problem is insignificant compared to the next.

The more serious problem is applying Simpson’s rule to the integral even if we avoid 0 as the lower limit of integration. We still have a vertical asymptote, even if our limits of integration avoid the singularity itself, and no polynomial ever had a vertical asymptote. With such decidedly non-polynomial behavior, we’d expect Simpson’s rule and its ilk to perform poorly.

Indeed that’s what happens. With step size 1/64, Simpson’s rule gives a value of 9.086 for the integral, which is completely wrong since we know the integral is roughly 2.0348. Even with step size 2^-20, i.e. over a million function evaluations, Simpson’s rule gives 4.0328 for the integral.
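The naive approach can be sketched the same way. The exact figure produced depends on implementation details of the quadrature routine, but any straightforward Simpson implementation lands far from the true value of roughly 2.0348, because the enormous value of the integrand at the left endpoint dominates the sum.

```python
import math

f = lambda x: 1 / math.sqrt(math.sin(x))

# Simpson's rule applied naively on [0.000001, 1], 64 subintervals.
a, b, n = 0.000001, 1.0, 64
h = (b - a) / n
naive = (h / 3) * sum(
    (1 if i in (0, n) else 4 if i % 2 else 2) * f(a + i * h)
    for i in range(n + 1)
)
# f(a) is about 1000, and its endpoint weight alone contributes several
# units to the estimate, swamping the true value of about 2.0348.
```

Contrast this with the subtracted-singularity version, where the same rule on the same grid is accurate to many figures.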

Incidentally, if we had simply approximated our integrand 1/√sin(x) with 1/√x, we would have estimated the integral as 2.0 and been correct to two significant figures, while Simpson’s rule applied naively gets no correct figures. On the other hand, Simpson’s rule applied cleverly (i.e. by subtracting off the singularity) gives eight correct figures with little effort. As I concluded here, a crude method cleverly applied beats a clever technique crudely applied.

I appreciate spare design, but the new Windows logo is just boring.

Here’s the rationale for the new logo according to The Windows Blog:

But if you look back to the origins of the logo you see that it really was meant to be a window. “Windows” really is a beautiful metaphor for computing and with the new logo we wanted to celebrate the idea of a window, in perspective. Microsoft and Windows are all about putting technology in people’s hands to empower them to find their own perspectives. And that is what the new logo was meant to be. We did less of a re-design and more to return it to its original meaning and bringing Windows back to its roots – reimagining the Windows logo as just that – a window.

When I hear someone I respect rave about a software tool I’ll take a look at it. When I do, it often leaves me cold and I wonder how they could think it’s so great. Is this person just easily impressed? I think I understand now why this happens.

Tools are used in a context, and that context is often missing from a discussion. Software developers talk more about their tools than how they string their tools together. In hardware terms, they talk about components but not the backplane that holds the components together. To understand how someone works, you need to know his or her backplane and not just a list of tools.

Tom Ryder had an interesting series of blog posts entitled Unix as IDE. He brings up some ideas that I believe would be of interest to people who don’t care about Unix. He’s talking about backplanes.

When you use Visual Studio or Eclipse as an IDE, that software is your backplane. It integrates your software development tasks. Many developers see their IDE as a godsend and couldn’t imagine working without it. Others disagree. It’s not that some people like an integrated development environment (IDE) and others prefer a disintegrated development environment (DDE). Some developers, like Ryder, have a different way of integrating activities. He explains that he’s not trying to condemn IDEs:

I don’t think IDEs are bad; I think they’re brilliant … In particular, I’m not going to try to convince you to scrap your hard-won Eclipse or Microsoft Visual Studio knowledge for the sometimes esoteric world of the command line. All I want to do is show you what we’re doing on the other side of the fence.

I found it interesting that although Ryder is fond of Vim, he’s not advocating Vim per se as an IDE. He sees Unix itself as his backplane and Vim as a component.

… the two grand old text editors Emacs and Vi (GNU Emacs and Vim) have such active communities developing plugins to make them support pretty much any kind of editing task. There are plugins to do pretty much anything you could really want to do in programming in both editors … the developers concerned are trying to make these text editors into IDEs in their own right. There are posts about never needing to leave Vim, or never needing to leave Emacs. But I think that trying to shoehorn Vim or Emacs into becoming something that it’s not isn’t quite thinking about the problem in the right way. [emphasis added]

I agree. I like Emacs, and I appreciate using one program where I used to use several. However, you quickly reach diminishing returns when you try to do everything in Emacs. Ryder goes on later to discuss Vim and how it fits into his ecosystem.

Part of the reason Vim is thought of as a toy or relic by a lot of programmers used to GUI-based IDEs is its being seen as just a tool for editing files on servers, rather than a very capable editing component for the shell in its own right. Its own built-in features being so composable with external tools on Unix-friendly systems makes it into a text editing powerhouse that sometimes surprises even experienced users. [emphasis added]

Many developers live in Visual Studio, but what happens when Visual Studio doesn’t integrate everything you need to do? Some would say that Windows is their backplane just as Unix is Ryder’s backplane. They use the file system, the clipboard, etc. But if you want to be more automated, you have the option of writing Visual Studio plugins or writing scripts to glue things together using COM or PowerShell. The transition between doing something manually and automatically is steeper on Windows.

Here’s an idea to chew on. Hayek argues that you either have to serve a market or a boss, and that the former is preferable.

Man in a complex society can have no choice but between adjusting himself to what to him must seem the blind forces of the social process and obeying the orders of a superior. So long as he knows only the hard discipline of the market, he may well think the direction by some other intelligent human brain preferable; but, when he tries it, he soon discovers that the former still leaves him at least some choice, while the latter leaves him none, and that it is better to have a choice between several unpleasant alternatives than being coerced into one.