Integration by parts says that ∫ u dv = uv − ∫ v du.
The first question students ask is “What do I make u and what do I make dv?” I used to tell my students to set u equal to the part you’d rather differentiate and dv equal to the part you’d rather integrate. That’s not bad advice, but it raises the question “How do I know what I want to differentiate and what I want to integrate?” Until you have some experience and intuition, that’s hard to answer.
Here’s a good rule of thumb: set u to the first term you see on this list:
- logarithmic function
- inverse trig function
- algebraic function
- trig function
- exponential function
This rule doesn’t cover everything — no rule can — but it works remarkably well. I don’t remember just where I found this; I believe it was in an article somewhere. I’m fairly certain I’ve never seen it in a calculus textbook.
Update: I found the reference for the rule above. “A Technique for Integration by Parts” by Herbert E. Kasube. American Mathematical Monthly, March 1983, page 210.
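For example, consider ∫ x eˣ dx. The algebraic factor x is the first thing on the list that appears in the integrand, so set u = x and dv = eˣ dx, which gives (x − 1)eˣ as an antiderivative. Here’s a quick numerical sanity check in Python (my own illustration, using only the standard library):

```python
import math

# Integration by parts with u = x (algebraic), dv = e^x dx:
#   ∫ x e^x dx = x e^x - ∫ e^x dx = (x - 1) e^x + C
def antiderivative(x):
    return (x - 1) * math.exp(x)

# Compare against a midpoint Riemann sum on [0, 1]
n = 100_000
a, b = 0.0, 1.0
h = (b - a) / n
riemann = sum((a + (i + 0.5) * h) * math.exp(a + (i + 0.5) * h)
              for i in range(n)) * h

exact = antiderivative(b) - antiderivative(a)  # equals 1 exactly
print(exact, riemann)
```

The two answers agree to many decimal places, as they should.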
One of my professors once told me that you learn the fastest when you’re slightly confused. If you’re too confused, you’re likely to give up in frustration. But if you’re not confused at all, you’re either not learning or learning very slowly. Slight confusion is the optimal state where you’re holding unresolved ideas in your head and making connections.
Confusion shows you’re thinking deeply enough to be confused. It takes effort to be confused rather than complacent.
Sometimes a muse will stir confusion in your mind on a previously settled topic. You can try to dismiss the muse as you would a housefly, or you can pursue a resolution to the confusion. You can find relief from your confusion by apathy or by hard work. If you choose to push through the confusion to resolution, your confidence will initially decrease even as your understanding increases. If you’re vocal about your confusion, you will invite ridicule. Confusion takes courage.
My daughters and I went to a CoinStar machine last night to convert a huge bowl of change into an Amazon gift card. The question came up of the probability of the total coming out an even dollar amount. My oldest said this had probability 1/100. My second oldest asked why we could talk about probabilities at all since the bowl just contained whatever it contained. Without knowing it, they represented the two major schools of probability interpretation, subjectivist and frequentist.
Subjectivists use probabilities to represent degrees of human belief or uncertainty, as well as frequencies. A subjectivist would argue that while the content of the bowl is not a random variable, our knowledge of that content is.
Frequentists shy away from such psychological interpretations of probability. A frequentist might come to the same 1/100 probability estimate but would interpret the statement as follows. “If we were to randomly fill the bowl with coins many times and take it each time to the CoinStar machine, in about 1 out of 100 trips on average we would end up with a whole dollar amount.”
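Here’s a small Monte Carlo sketch of that frequentist reading (my own illustration; it assumes each coin’s denomination is chosen uniformly at random, which a real bowl certainly doesn’t guarantee):

```python
import random

random.seed(42)  # reproducible

denominations = [1, 5, 10, 25]   # cents; uniform choice is an assumption
coins_per_bowl = 200
trials = 10_000

# Fill the bowl many times and count how often the total is a whole dollar
whole_dollar = 0
for _ in range(trials):
    total = sum(random.choice(denominations) for _ in range(coins_per_bowl))
    if total % 100 == 0:
        whole_dollar += 1

frequency = whole_dollar / trials
print(frequency)  # hovers around 1/100
```

With enough coins in the bowl, the total modulo 100 cents is very nearly uniform, so the long-run frequency comes out close to the 1/100 my daughter guessed.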
To look at another example, suppose an Oxford librarian discovered a manuscript and suspected it of being a previously unknown Shakespeare play. He might analyze word choice frequencies to come up with a probability that the play was indeed written by Shakespeare. He might conclude, for example, that there is an 80% chance that The Bard wrote the play. This would be a subjective probability since the 80% figure is a statement about the librarian’s confidence, not about the manuscript itself.
A strict frequentist would object that this is all nonsense: either Shakespeare wrote the manuscript or he didn’t, so there’s no probability involved. He might try to salvage the probability interpretation by speculating about what would happen if we were to discover and analyze an infinite number of similar manuscripts.
Here’s some Python code to create a sitemap in the format specified by sitemaps.org and read by search engines. Download the file sitemapmaker.txt and change the extension from .txt to .py. Edit the url variable in the script before running it, or else you’ll point search engines to my web site rather than yours. Also edit the extensions_to_keep variable if you want to index any file types besides HTML and PDF.

Copy the file sitemapmaker.py to the directory on your computer where you have your files. Run the script and direct its output to a file: python sitemapmaker.py > sitemap.xml. See sitemaps.org for instructions on how to let search engines know about your sitemap.
This code assumes all the files to index in your sitemap are in one directory, the directory you run the script from. It also assumes the timestamps on your computer match those on your web server. Optional fields are left out of the sitemap.
There’s only one symbol in statistics, p. The same variable represents everything. You just get used to it and figure out which p is which from context. It reminds me of George Foreman naming all five of his sons George. Here’s an example I ran across recently where p represents four different functions in one equation:
p(θ | x) = p(x | θ) p(θ) / p(x)
Usually this is done with no explanation, but in the example above the author explains that he’s denoting entirely different functions with the same symbol in order to avoid the “clumsy notation” that being explicit would require.
Sometimes the overloading of the 16th letter of the English alphabet becomes just too much, and statisticians break down and use the Greek counterpart, π. Then, to make matters even more confusing to the uninitiated, π can be either a variable or a function.
I added two technical articles to my personal web site this evening.
Step size for numerical differential equations is a one-page set of notes on how to select the optimal step size when numerically solving ODEs (ordinary differential equations).
Separation of convex sets in linear topological spaces is a highly technical article I wrote a long time ago. I decided to put it on the web in case someone finds it useful. It’s a fairly obscure topic, but this paper covers it thoroughly.
I maintain two pages for articles, one for informal notes and one for academic publications. I went back and added my old PDE papers to my academic publications page this evening.
Update: Posted an old PDE paper, Distributed Systems of PDE in Hilbert Space.
Mark Lentczner has posted a periodic table of Perl operators. The table shows Perl 6 in all its Byzantine glory. If you work in the language constantly and enjoy the terse syntax optimized for experts, you’ll love Perl 6. But if you’re already having difficulty holding Perl in your head, this periodic table might be the straw that breaks the camel’s back.
In 1986, Lawrence Leemis published a paper containing a diagram of 43 probability distribution families. The diagram summarizes connections between the distributions with arrows: chi-squared is a special case of gamma, Poisson is a limiting case of the binomial, the ratio of two standard normals is a Cauchy, etc. It’s a very handy reference, a sort of periodic table for statisticians. His diagram and variations on it have appeared in several textbooks over the last 20 years, such as Casella and Berger.
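The first of those connections is easy to verify directly from the density formulas: chi-squared with k degrees of freedom is gamma with shape k/2 and scale 2. A quick check in Python (densities written out by hand; my own illustration, not part of Leemis’s paper):

```python
import math

def gamma_pdf(x, shape, scale):
    """Gamma density: x^(a-1) e^(-x/s) / (Gamma(a) s^a)."""
    return x**(shape - 1) * math.exp(-x / scale) / (math.gamma(shape) * scale**shape)

def chi_squared_pdf(x, k):
    """Chi-squared density with k degrees of freedom."""
    return x**(k / 2 - 1) * math.exp(-x / 2) / (2**(k / 2) * math.gamma(k / 2))

# chi-squared(k) is gamma with shape k/2 and scale 2
for k in (1, 2, 5):
    for x in (0.5, 1.0, 3.0, 10.0):
        assert abs(chi_squared_pdf(x, k) - gamma_pdf(x, k / 2, 2)) < 1e-12
```

The two formulas are term-for-term identical once you substitute shape = k/2 and scale = 2.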
Now Leemis has published an expanded version containing 76 probability distributions. The paper is in the February 2008 issue of American Statistician and is also available online. The heart of the article is the diagram on page 3.
Update: See clickable distribution diagram
VectorMagic is a free online tool from the Stanford University AI lab for converting bitmap images to vector formats. The image below shows an example of what you might use this tool for.
I just heard about the software and tried it out with a fairly complex image, a sample of Japanese calligraphy, and it did a beautiful job converting the image from bitmap to EPS (Encapsulated PostScript).
The software supports JPEG, GIF, PNG, BMP, and TIFF input. It supports EPS, SVG, and PNG output.
(In case you’re not familiar with graphic formats, a bitmap image is a matrix of dots. The format records what color each dot is. That works fine when an image is displayed at its original resolution. But if you make the image bigger, you just get bigger dots and things look grainy. A vector format stores the formulas for the curves that make up the image, not the dots, and computes the dots when it’s time to display the image. If you make an image larger, it computes new dots according to the formulas. Software for making bitmaps smaller is common. Software that does a good job of making bitmaps larger is rare.)
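Here’s a toy illustration of the difference (my own sketch, nothing to do with VectorMagic’s algorithm): it renders a disk two ways, once by blowing up a small bitmap through pixel duplication, and once by re-evaluating the circle’s equation at the higher resolution.

```python
def rasterize_disk(size):
    """The 'vector' way: evaluate x^2 + y^2 <= r^2 at each pixel."""
    c, r = size / 2, size / 3
    return [["#" if (x - c)**2 + (y - c)**2 <= r * r else "."
             for x in range(size)] for y in range(size)]

def upscale_bitmap(bitmap, factor):
    """The 'bitmap' way: enlarge by duplicating pixels -- bigger dots."""
    return [[row[x // factor] for x in range(len(row) * factor)]
            for row in bitmap for _ in range(factor)]

small = rasterize_disk(8)
blocky = upscale_bitmap(small, 4)   # 32x32, but only 8x8 worth of detail
smooth = rasterize_disk(32)         # 32x32 recomputed from the formula
```

Print the two 32×32 grids side by side and you’ll see it: the duplicated version is made of 4×4 blocks of identical pixels (the “bigger dots”), while the recomputed version has a much smoother edge.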
Update: VectorMagic has moved to a new domain. I’ve corrected the link above.
In the book Universal Principles of Design by William Lidwell, Kritina Holden, and Jill Butler, the authors have this to say about the flexibility-usability trade-off.
It is a common assumption that designs should always be made as flexible as possible. However, flexibility has real costs in terms of complexity, usability, time, and money; it generally pays dividends only when an audience cannot clearly anticipate its future needs.
Good design is not about making everything flexible. It’s about making the right things flexible, and making other things more rigid. Good design could be described in terms of what is rigid as much as in terms of what is flexible. The art is knowing which things to make flexible, while casting other things in stone.
* * *
Agile software forecasting
Interaction design guru Alan Cooper gave a presentation recently entitled An Insurgency of Quality. As part of his talk, he explains why programmers cannot be managed. Traditional management has an industrial age mindset, while software development is a post-industrial craft. That mismatch explains a great deal. For example, industrial workers respect authority, but programmers respect competence.
According to Cooper, the leader of a group of programmers should be a facilitator, not a manager. Johanna Rothman in her interview on the Pragmatic Programmer podcast elaborates on this same view. The manager’s job is to remove obstacles to productivity — acquire resources, provide protection from interruptions and distractions, etc. — rather than to manage in the industrial sense.
This morning I posted some notes on orthogonal polynomials and Gaussian quadrature.
“Orthogonal” just means perpendicular. So how can two polynomials be perpendicular to each other? In geometry, two vectors are perpendicular if and only if the dot product of their coordinates is zero. In more general settings, two things are said to be orthogonal if their inner product (a generalization of the dot product) is zero. So what was a theorem in basic geometry is taken as a definition in other settings. Typically mathematicians say “orthogonal” rather than “perpendicular.” The basic idea of lines meeting at right angles acts as a reliable guide to intuition in more general settings.
Two polynomials are orthogonal if their inner product is zero. You can define an inner product for two functions by integrating their product, sometimes with a weighting function.
Orthogonal polynomials have remarkable properties that are easy to prove. Last week I posted some notes on Chebyshev polynomials. The notes posted today include Chebyshev polynomials as a special case and focus on the application of orthogonal polynomials to quadrature. (“Quadrature” is just an old-fashioned word for integration, usually applied to numerical integration in one dimension.) It turns out that every class of orthogonal polynomials corresponds to an integration rule.
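As a small concrete illustration (mine, not from the notes): the Legendre polynomials P₁(x) = x and P₂(x) = (3x² − 1)/2 are orthogonal over [−1, 1] with weight 1, and the corresponding 2-point Gauss rule, with nodes ±1/√3 and weights 1, integrates every cubic polynomial exactly.

```python
import math

def p1(x): return x
def p2(x): return (3 * x * x - 1) / 2

# Inner product: integrate the product over [-1, 1] (midpoint rule)
n = 100_000
h = 2 / n
inner = sum(p1(-1 + (i + 0.5) * h) * p2(-1 + (i + 0.5) * h)
            for i in range(n)) * h
# inner is ~0: P1 and P2 are orthogonal

# 2-point Gauss-Legendre rule: nodes +/- 1/sqrt(3), weights 1
def gauss2(f):
    x = 1 / math.sqrt(3)
    return f(-x) + f(x)

f = lambda x: 4 * x**3 + 3 * x**2 - 2 * x + 1   # an arbitrary cubic
exact = 2 + 2   # over [-1,1] the odd terms vanish; 3x^2 gives 2, 1 gives 2
print(inner, gauss2(f), exact)
```

Two function evaluations and the cubic is integrated exactly; that efficiency is the point of Gaussian quadrature.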
Male honeybees are born from unfertilized eggs. Female honeybees are born from fertilized eggs. Therefore males have only a mother, but females have both a mother and a father.
Take a male honeybee and graph his ancestors. Let B(n) be the number of bees at the nth level of the family tree. At the first level of the tree is our male honeybee all by himself, so B(1) = 1. At the next level of our tree is his mother, all by herself, so B(2) = 1.
Pick one of the bees at level n of the tree. If this bee is male, he has a mother at level n+1, and a grandmother and grandfather at level n+2. If this bee is female, she has a mother and father at level n+1, and one grandfather and two grandmothers at level n+2. In either case, the number of grandparents is one more than the number of parents. Therefore B(n) + B(n+1) = B(n+2).
To summarize, B(1) = B(2) = 1, and B(n) + B(n+1) = B(n+2). These are the initial conditions and recurrence relation that define the Fibonacci numbers. Therefore the number of bees at level n of the tree equals F(n), the nth Fibonacci number.
This is a more realistic demonstration of Fibonacci numbers in nature than the oft-repeated rabbit problem.
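The counting argument above is easy to check in code. This sketch tracks males and females at each level (every bee has a mother; only females have fathers) and compares the level totals to the Fibonacci numbers:

```python
def bee_ancestors(levels):
    """Count bees at each level of a male bee's ancestor tree."""
    counts = []
    males, females = 1, 0          # level 1: the male himself
    for _ in range(levels):
        counts.append(males + females)
        # Next level up: every female has a father; every bee has a mother.
        males, females = females, males + females
    return counts

def fibonacci(levels):
    fibs, a, b = [], 1, 1
    for _ in range(levels):
        fibs.append(a)
        a, b = b, a + b
    return fibs

print(bee_ancestors(10))  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```

The level counts match F(n) exactly, as the recurrence argument predicts.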
Cyndi Mitchell in a talk from Rails Conf points out how “enterprise” in the phrase “enterprise software” has taken on the opposite of its customary meaning.
If you call a person enterprising, you have in mind someone who takes risks and accomplishes things. And “Enterprise” has been the name of numerous ships, real and fictional, based on the bold, adventurous overtones of the name. But Cyndi Mitchell says when she thinks about enterprise software, the first words that come to mind are bloatware, incompetence, and corruption. I wouldn’t go quite that far, but words like “bureaucratic” and “rigid” would certainly be on my list. In any case, “enterprise” has a completely different connotation in “enterprise software” than in “USS Enterprise.”
Here are a couple podcasts introducing Windows developers to software development on the Macintosh.
Scott Hanselman: What’s it like for Mac Developers, an interview with Steven Frank
.NET Rocks: Miguel de Icaza and Geoff Norton on Mono, mostly about .NET development on the Mac
Also, there are a lot of Mac-related talks on the GeekCruise podcast. The talks from January 2007 were directed at a general audience new to the Mac.
Hanselman’s podcast talks about some of the cultural differences between Microsoft and Apple customers. For example, Mac users update their OS more often and complain less about OS changes that break software.