Scalability and immediate appeal

Paul Graham argues that people take bad jobs for the same reasons they eat bad food. The advantages of both are immediately apparent: convenience and immediate satisfaction. The disadvantages take longer to realize. Bad jobs drag down your soul the way bad food drags down your body.

I first read Graham’s essay You Weren’t Meant to Have a Boss when he wrote it three years ago. I read it again this morning when I saw a link to it on Hacker News. I found his thesis less convincing this time around. But he makes two general points that I think I missed the first time.

  1. Watch out for things that are immediately appealing but harmful in the longer term.
  2. Watch out for being part of someone else’s scalability plans.

The first point is familiar advice, but worth being reminded of. The second point is more subtle.

Companies sell bad food for the same reason they offer bad jobs: it scales. It’s easy to create bland food and bland jobs on a large scale. Fresh food and creative jobs don’t scale so well.

When you choose to eat junk food, you more or less consciously choose convenience or immediate satisfaction over long-term benefit. But it may not be obvious when your range of options has been selected for scalability. For example, few students realize how much the educational system has been designed for the convenience of administrators. Being aware of an organization’s scalability needs can help you interact with it more intelligently.

Sage Beginner’s Guide

I like books. Given a choice, I’d much rather read a book than online documentation. Typically a book speaks with one voice and has been more carefully edited. Unfortunately, it can be hard to find books on specialized software. That’s why I was glad to hear there’s a book on Sage, a project that integrates many Python libraries for mathematical computing into a single context.

Craig Finch’s book Sage Beginner’s Guide provides an easy-to-read overview of Sage. The book is filled with examples. In fact, every topic is introduced by an example. Explanations follow the examples in sections entitled “What just happened?”. Follow-up exercises are provided to solidify the material after the example and explanation.

Someone could begin using Sage without knowing Python. They could think of Sage as an open source Mathematica-like application. But one of the strengths of Sage is that its underlying language is Python. And the Sage Beginner’s Guide has two chapters devoted to the Python language, one basic and one advanced.

Finch’s book is primarily focused on Sage as a whole, not the Python components it integrates. However, it does point out which component library provides a given piece of functionality when that library either conflicts with Sage or can be used to refine Sage’s functionality.

Sage can be challenging to install. It is not yet directly supported on Windows; the recommended way to use Sage on Windows is to download a Linux virtual machine with Sage installed. I was able to install Sage on Ubuntu but not yet on OS X. However, you can try out Sage without installing it by using Sage online notebooks.

I don’t have as much experience with Sage as with some of its components. As far as I can tell, it’s easy to take your experience from component libraries—say NumPy—and bring it over to Sage. It would be harder to take functionality you discovered while using Sage and use it outside of Sage, though that is to be expected since part of the value Sage adds is smoothing over the peculiarities of each component library.

How to design a quiet room

How would you design a quiet study room? If you know a little about acoustics you might think to avoid hard floors, hard surfaces, parallel walls, and large open spaces. The reading room of the Life Science Library at the University of Texas does the opposite. And yet it is wonderfully quiet.

The room is basically a big box, maybe 100 ft long. The slightest noise reverberates throughout the room. But because the room is so live, the people inside are very quiet.

Maybe C++ hasn’t jumped the shark after all

A couple of years ago I wrote a blog post, Has C++ jumped the shark?, wondering how many people would care about the new C++ standard by the time it came out. I doubted that it would matter much to me personally.

… if something is hard to do in C++, I just don’t use C++. I don’t see a new version of C++ changing my decisions about what language to use for various tasks.

The new standard is out now, and I find it more interesting than I thought I would. I’d still rather not use C++ for tasks that are easier in another language I know, but sometimes I don’t have that option.

When I heard about plans to add lambdas and closures to C++ I commented here that “it seems odd to add these features to C++.”  Now I think lambdas (anonymous functions) and closures may be the most important new features for my use.

One of my favorite features in Python is the ability to define functions on the fly. If I’m writing a function and need to create a new function to pass as an argument, then I can define that function on the spot. By contrast, C++ did not let you define a function inside another function. Now you can.

(You still cannot define a named function inside a C++ function. However, you can create anonymous functions inside another function. If you save that anonymous function to a variable, then you’ve effectively created a named function.)
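
Here is a minimal sketch of what that looks like with the new C++11 lambda syntax; the name square is just for illustration.

```cpp
#include <iostream>

int main()
{
    // An anonymous function saved to a variable, which then acts
    // like a named function defined inside main().
    auto square = [](double x) { return x * x; };

    std::cout << square(3.0) << std::endl;  // prints 9
    return 0;
}
```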

Here are a couple examples of how it can be very convenient to define little functions on the fly. First, optimization software typically provides methods for minimizing functions. If you want to maximize a function f(x), you define a new function g(x) = -f(x) and minimize g. Second, root-finding software typically solves equations of the form f(x) = 0. If you want to solve f(x) = c, you create a new function g(x) = f(x) - c and find where g is zero. In both cases, the function g is so trivial that there’s no need to give it a name. And more importantly, you’d rather define this auxiliary function right where you need it than interrupt your context by defining it outside.
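
Here is a sketch of the second case. The find_zero routine below is only a stand-in bisection root finder, not a call into any particular library; the point is the lambda that shifts f by c right at the call site. Maximizing by minimizing -f(x) works the same way.

```cpp
#include <cmath>
#include <functional>
#include <iostream>

// Stand-in root finder: solves g(x) == 0 on [a, b] by bisection,
// assuming g changes sign on the interval.
double find_zero(const std::function<double(double)>& g, double a, double b)
{
    for (int i = 0; i < 100; ++i) {
        double m = 0.5 * (a + b);
        if (g(a) * g(m) <= 0.0)
            b = m;
        else
            a = m;
    }
    return 0.5 * (a + b);
}

int main()
{
    double c = 0.5;

    // Solve cos(x) = c by finding a zero of cos(x) - c,
    // defining the shifted function right where it's needed.
    double root = find_zero([c](double x) { return std::cos(x) - c; }, 0.0, 2.0);

    std::cout << root << std::endl;  // approximately pi/3
    return 0;
}
```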

Anonymous functions can also have associated data obtained from the context where they’re defined. That’s called a closure because it encloses some context with the function. This is very often necessary in mathematical software development. For example, say you have a function of two variables f(x, y) and you want to integrate f with respect to x, holding y constant. You might then think of f as a function of one variable with one constant, but the compiler sees it as a function of two variables and will not let you pass it to an integration routine expecting a function of one variable. One way to solve this problem is to use function objects. This isn’t difficult, but it requires a lot of ceremony compared to using closures.
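
Here is a sketch of that situation with a C++11 closure. The integrate routine below is only a stand-in midpoint rule, playing the role of whatever quadrature routine expects a function of one variable; the closure captures y so that f(x, y) can be passed as a function of x alone.

```cpp
#include <cmath>
#include <functional>
#include <iostream>

// Stand-in quadrature routine: integrates a function of ONE variable
// over [a, b] using the midpoint rule.
double integrate(const std::function<double(double)>& f, double a, double b)
{
    const int n = 1000;
    double h = (b - a) / n;
    double sum = 0.0;
    for (int i = 0; i < n; ++i)
        sum += f(a + (i + 0.5) * h) * h;
    return sum;
}

// A function of two variables.
double f(double x, double y) { return std::exp(-y * x * x); }

int main()
{
    double y = 2.0;

    // The closure captures y, turning f(x, y) into a function of x alone,
    // which is what the integration routine expects.
    double result = integrate([y](double x) { return f(x, y); }, 0.0, 1.0);

    std::cout << result << std::endl;
    return 0;
}
```

A hand-written function object would do the same job: a small class holding y as a member and defining operator() to take x. The closure compresses that ceremony into one line at the point of use.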

For specifics of how to use lambdas and closures in C++, see Ajay Vijayvargiya’s article Explicating the new C++ standard (C++0x), and its implementation in VC10.

Golden Carnival of Mathematics

Welcome to the 79th edition of the Carnival of Mathematics. By tradition, each edition begins with a bit of trivia about the number of the carnival.

Gold has atomic number 79, so this is the golden edition. There is an older tradition of calling 25th things silver, 50th things gold, etc. However, I propose switching to atomic numbers as this system is simpler and easier to look up. :)

Not only is 79 a prime number, it belongs to numerous categories of prime numbers:

  • Cousin
  • Fortunate
  • Lucky
  • Happy
  • Gaussian
  • Higgs
  • Kynea
  • Permutable
  • Pillai
  • Regular
  • Sexy

(Definitions here)

And now on to the posts.

History

Fëanor at JOST A MON presents Cipher, a history of zero through the Middle Ages.

Historical wanderings

Katie Sorene takes us on a stroll through ancient labyrinths on her blog Travel Blog – Tripbase.

Guillermo Bautista strolls through The Seven Bridges of Königsberg at Mathematics and Multimedia.

Wandering

Next we wander around a grid of dominoes. Jim Wilder presents The Domino Effect: An Elementary Approach to the Kruskal Count. Starting from this elementary post, you can wander into an investigation of coupling methods for Markov chains.

Teaching

Peter Rowlett discusses whether there is a generational gap between professors and students on his blog Travels in a Mathematical World.

Alexander Bogomolny from CTK Insights presents An Olympiad Problem for a Kindergarten Investigation. He gives a problem that is simple to describe and that has a simple but sophisticated solution.

Media

Mike Croucher shares a couple videos simulating pendulum waves, one in Maple and one in Mathematica.

Peter Rowlett explains why he supports Relatively Prime, Samuel Hansen’s Kickstarter project. Samuel Hansen has produced two mathematical podcasts and is now raising donations to fund the creation of a series of mathematical documentaries.

Computing

One of the most fundamental questions you can ask about a computer program is whether it stops. This may seem like an easy question, yet no one knows whether a certain three-line program always terminates. Fëanor shared a link to Brian Hayes’s commentary Don’t try to read this proof! on a recent proposed proof of the Collatz conjecture.
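
The program in question is presumably just the Collatz iteration itself (the linked commentary is about the Collatz conjecture); a sketch in C++:

```cpp
#include <cstdint>
#include <iostream>

int main()
{
    std::uint64_t n = 27;  // try any positive starting value

    // The Collatz iteration: halve n if it is even, otherwise replace it
    // with 3n + 1. Whether this loop terminates for every starting value
    // is the open Collatz conjecture.
    while (n != 1)
        n = (n % 2 == 0) ? n / 2 : 3 * n + 1;

    std::cout << "reached 1" << std::endl;
    return 0;
}
```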

Applications often involve matrices that are too large to store directly. For an introduction to how large matrices are represented in computer memory, see Storing Banded Matrices for Speed from The NAG Blog. The post promises to be the first in a series.

Next

You can submit posts for the next Carnival of Mathematics here. Also, you can keep up with Carnival of Mathematics news and other mathematical tidbits by following CarnivalOfMath on Twitter.

Sorting

From Knuth’s book Sorting and Searching:

Computer manufacturers of the 1960’s estimated that more than 25 percent of the running time of their computers was spent on sorting, when all their customers were taken into account. In fact, there were many installations in which the task of sorting was responsible for more than half of the computing time. From these statistics we may conclude that either

  1. there are many important applications of sorting, or
  2. many people sort when they shouldn’t, or
  3. inefficient sorting algorithms have been in common use.

Computing has changed since the 1960’s, but not so much that sorting has gone from being extraordinarily important to unimportant.

From the world, to the world

Edmund Harriss describes an interesting pattern he sees in mathematics and constructivist art in his interview on Strongly Connected Components. For most of history, mathematics and art have been fairly direct abstractions of physical reality. Then in the 20th century both became more and more abstract. But then a sort of reversal took place. After reaching heights of abstraction—Harriss cites Gödel and Picasso as examples—both mathematics and art began to apply abstractions back to the physical world.

… starting from clearly abstract structures and building something real from the abstract rather than abstracting something from the real. … You have models that you can then apply to the world rather than models you took from the world.

Stephen Wolfram has an analogous idea about computer programs. Until now we have written programs to solve specific problems. Wolfram suggests we reverse this and explore the space of all possible computer programs. As he demonstrates in his magnum opus, simple programs can have surprisingly complex behavior. We may be able to find some relatively small but useful programs that way.

As Edmund Harriss alludes, people have successfully applied very abstract mathematics, mathematics developed with no physical application in mind, to physical problems. I’m more skeptical of Stephen Wolfram’s proposal.

Suppose you find a program that appears to solve some problem, such as optimally controlling a nuclear reactor. How do you really know what it does? You didn’t write it; you found it. It wasn’t designed to solve the problem; you discovered that it (apparently) solves the problem. Wolfram is optimistic that we could discover programs that we might never be able to write. But a program that powerful would likely also be impossible to thoroughly understand. When Wolfram says “Look what interesting behavior tiny programs can have!” I think “Look how hard it can be to understand arbitrary programs even when they’re small!”

Computer science might not be that helpful in determining what a found program does. It is theoretically impossible to write a program that can always determine whether another program stops. We would have to study the program empirically. This process would be more like squeezing an extract from some plant root and testing its medicinal properties than designing drugs.

Related post: What does this code do?