One life-lesson from math is that sometimes you can solve a problem without doing what the problem at first seems to require. I’ll give an elementary example and a more advanced example.

The first example is finding remainders. What is the remainder when 5,000,070,004 is divided by 9? At first it may seem that you need to divide 5,000,070,004 by 9, but you don’t. You weren’t asked for the quotient, only the remainder, which in this case you can find directly. By casting out nines (add the digits: 5 + 7 + 4 = 16, then 1 + 6 = 7), you can quickly see the remainder is 7.
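A quick sketch of the trick in code (the function name is my own): a number and its digit sum leave the same remainder mod 9, so repeated digit-summing finds the remainder with no long division.

```python
# Casting out nines: n and its digit sum leave the same remainder mod 9,
# so we can find the remainder without ever dividing n itself.
def remainder_mod_9(n: int) -> int:
    while n > 9:
        n = sum(int(d) for d in str(n))
    return n % 9  # reduce a final digit sum of 9 to 0

print(remainder_mod_9(5_000_070_004))  # → 7
```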

The second example is definite integrals. The usual procedure for computing definite integrals is to first find an indefinite integral (i.e. anti-derivative) and take the difference of its values at the two end points. But sometimes it’s possible to find the definite integral directly, even when you couldn’t first find the indefinite integral. Maybe you can evaluate the definite integral by symmetry, or a probability argument, or by contour integration, or some other trick.
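For instance, a symmetry argument can finish an integral whose antiderivative is not elementary. The integrand below is my own illustrative choice: it is odd, so its integral over a symmetric interval is zero, and the numerical check merely confirms what symmetry gives instantly.

```python
import math

# f is odd (f(-x) = -f(x)), so its integral over [-a, a] is 0 by symmetry,
# even though sin(x) * exp(-x^2) has no elementary antiderivative.
def f(x):
    return math.sin(x) * math.exp(-x * x)

# Midpoint-rule check on [-2, 2]:
n, a = 100_000, 2.0
h = 2 * a / n
approx = sum(f(-a + (i + 0.5) * h) for i in range(n)) * h
print(abs(approx) < 1e-9)  # → True
```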

Contour integration is an interesting example because you *don’t* do what you might think you need to — i.e. find an indefinite integral — but you *do* have to do something you might never imagine doing before you’ve seen the trick, i.e. convert an integral over the real line to an integral in the complex plane to make it simpler!

What are some more examples, mathematical or not, of solving a problem without doing something that at first seems necessary?

**Related posts**:

- The power of parity
- How has math changed your view of the world?
- Maybe you only need it because you have it

The most obvious example, in my opinion, is summing the first n terms of an arithmetic sequence.

E.g. S = 1 + 2 + … + 2713

2×S = (1 + 2713) + (2 + 2712) + … + (2713+1) = 2713×(2713+1)

So S = 2713×(2713+1)/2 = 3681541

So there is no need to waste one’s time with 2712 additions.
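The same pairing trick as a one-line closed form (a sketch; the function name is mine):

```python
# Gauss's pairing trick: 2*S = n*(n+1), so the closed form
# replaces 2712 additions with one multiplication and a halving.
def sum_first(n: int) -> int:
    return n * (n + 1) // 2

print(sum_first(2713))  # → 3681541
```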

I wrote about a similar experience I had computing an integral: in this case, applying integration by parts made the problem look more complicated, but actually made it easier to solve. I suspect this sort of thing happens a lot in computing integrals (and other cook-bookery type math) where no one algorithm works.

Your example of a probability argument rang a bell. One of my graduate professors told the story of stumping colleagues in other branches of math with a challenge to prove a somewhat hairy-looking equation involving a complicated sum on one side and an annoying integral on the other. In the context of a probability class, the proof was by inspection: the left-hand side was the probability that the number of events prior to time T in a Poisson process with rate L is at least N, and the right-hand side was the probability that an Erlang(N, L) random variable takes a value less than or equal to T.
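A numerical sketch of that identity (the parameter values are my own choice): the Poisson tail sum on one side, and a direct numerical integration of the Erlang density on the other, so the check is not circular.

```python
import math

# Claim: P(Poisson-process count by time T >= N) == P(Erlang(N, L) <= T).
# Left side: a Poisson tail sum. Right side: integrate the Erlang density
# numerically rather than reusing the same sum.
L, T, N = 1.5, 2.0, 3

poisson_tail = 1 - sum(math.exp(-L * T) * (L * T) ** k / math.factorial(k)
                       for k in range(N))

def erlang_pdf(t):
    return L ** N * t ** (N - 1) * math.exp(-L * t) / math.factorial(N - 1)

steps = 100_000
h = T / steps
erlang_cdf = sum(erlang_pdf((i + 0.5) * h) for i in range(steps)) * h

print(abs(poisson_tail - erlang_cdf) < 1e-6)  # → True
```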

When I was a kid, I discovered a trick for determining whether fractions are reducible. If…

(denominator-numerator)/denominator

…is irreducible, then so is the original fraction, and if it is reducible, so is the original fraction. Occasionally it’s easier to eyeball whether the expression above is reducible than to deal with the fraction you’re initially given.
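The trick is one step of the Euclidean algorithm, since gcd(n, d) = gcd(d − n, d). A minimal sketch (the example fraction is mine):

```python
from math import gcd

# gcd(n, d) == gcd(d - n, d), so n/d is reducible
# exactly when (d - n)/d is.
def reducible(num: int, den: int) -> bool:
    return gcd(num, den) > 1

# 51/68 is hard to eyeball, but (68 - 51)/68 = 17/68 clearly shares 17.
print(reducible(51, 68), reducible(68 - 51, 68))  # → True True
```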

9 has more interesting properties that fall into this category: http://blog.fogus.me/2010/10/30/09/

Some people are really good at knowing, as if by proprioception, where the cardinal directions are. This is a challenge for me. One afternoon I was walking with a friend and asked him which direction was north; I was trying to work out hints about direction from what I knew of our surroundings. “Look,” he said, “there’s the sun.”

A favourite from this blog: find inv(A)*b without inverting matrix A.
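A minimal sketch of the idea in pure Python: Gaussian elimination with partial pivoting solves A x = b directly, never forming inv(A). (In practice one would call a library routine such as numpy.linalg.solve; this is only an illustration.)

```python
# Solve A x = b by Gaussian elimination with partial pivoting,
# without ever computing inv(A).
def solve(A, b):
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix [A | b]
    for col in range(n):
        # Swap in the row with the largest pivot for stability.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        # Eliminate the entries below the pivot.
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    # Back-substitution.
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

print(solve([[2.0, 1.0], [1.0, 1.0]], [3.0, 2.0]))  # → [1.0, 1.0]
```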

Rejection sampling: draw a sample from a distribution defined by a probability density function, without finding its inverse cumulative distribution.
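A small sketch of the method (the target density is my own toy choice): accept a uniform proposal with probability proportional to the density, using only density evaluations and no inverse CDF.

```python
import random

# Rejection sampling from density f(x) = 3x^2 on [0, 1]:
# propose x uniformly, accept with probability f(x) / f_max.
def sample(f, f_max, lo, hi, rng):
    while True:
        x = rng.uniform(lo, hi)
        if rng.uniform(0, f_max) <= f(x):
            return x

rng = random.Random(42)
draws = [sample(lambda x: 3 * x * x, 3.0, 0.0, 1.0, rng)
         for _ in range(10_000)]
# The true mean of this density is 3/4.
print(min(draws) >= 0.0 and max(draws) <= 1.0)  # → True
```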

Homomorphic encryption: compute with encrypted data without first decrypting it: http://www.americanscientist.org/issues/pub/2012/5/alice-and-bob-in-cipherspace/1

Maximum likelihood estimation of exponential family models by stochastic approximation: no need to approximate the likelihood being optimized, just go straight for its derivatives (easier to estimate).

A nice class of things…I look forward to seeing more!

If you want to know the integral of a product of two Gaussians, all you need is one evaluation of a third Gaussian.
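Concretely, the claim is that ∫ N(x; m1, v1) N(x; m2, v2) dx = N(m1; m2, v1 + v2). A numerical sketch with parameters of my own choosing:

```python
import math

def normal_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# The integral of a product of two Gaussian densities collapses to
# a single Gaussian evaluation: N(m1; m2, v1 + v2).
m1, v1, m2, v2 = 0.5, 1.0, -1.0, 2.0
closed_form = normal_pdf(m1, m2, v1 + v2)

# Midpoint-rule check over a wide interval (tails beyond are negligible):
lo, hi, n = -20.0, 20.0, 200_000
h = (hi - lo) / n
numeric = sum(normal_pdf(lo + (i + 0.5) * h, m1, v1) *
              normal_pdf(lo + (i + 0.5) * h, m2, v2) for i in range(n)) * h

print(abs(numeric - closed_form) < 1e-7)  # → True
```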

There are a lot of examples I can think of. What they share is that one can reason about a system or problem rather than within it. In chess, for example, there are blocked positions in which it is obvious to a human that neither side can break through, but a computer relying on brute-force search would never “understand” this structural fact. In mathematical logic, results such as Gödel’s theorem (or, equivalently in computer science, the undecidability of the halting problem) come from reasoning about a formal system rather than just following rules within it.

More generally, all of mathematics is about being able to prove something without going through all possibilities. Something as simple as showing that there are infinitely many primes is fundamental to the whole enterprise. I have found that many non-mathematicians cannot grasp this: they think in terms of having to check every case, which of course can’t be done, and so they imagine it is somehow an open question whether there could be a final prime.

Solving a differential equation by guessing a solution and then showing that it works. If the problem has uniqueness of solutions, you are done.

Josh Braun’s response is equivalent to the Euclidean algorithm; another way of saying what he said is that it lets you find common factors without prime factorizing.

For me the most miraculous items in this category seem to be with expected value, where first of all linearity lets you dispense with all kinds of information about covariances, and second of all you can sum a whole bunch of infinite series that come up in expected value calculations by solving a simple linear equation instead.
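A classic instance of the second point (my own illustration): the expected number of coin flips until the first head satisfies E = 1 + (1/2)E, so E = 2, with no infinite series to sum. A simulation agrees:

```python
import random

# E[flips to first head] solves E = 1 + (1/2) E  =>  E = 2,
# a linear equation instead of summing sum(k / 2**k).
rng = random.Random(0)

def flips_until_head():
    n = 1
    while rng.random() >= 0.5:  # tails: flip again
        n += 1
    return n

trials = 100_000
mean = sum(flips_until_head() for _ in range(trials)) / trials
print(abs(mean - 2.0) < 0.05)  # → True
```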

@Joshua Zucker: I knew someone had to have thought of it before. Glad to put a name to the phenomenon.

One example is finding the center of mass of a semicircle (radius = 1). That is, the distance, call it x, from the diameter to the COM. This appears to require calculus.

However: If you rotate the semi-circle about its diameter, it forms a sphere. Pappus’ Theorem then applies:

(area) (distance COM travels) = (volume of the solid swept out)

(pi/2)(2pi*x) = (4pi/3)

From which x = 4/(3pi)
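A quick numerical cross-check of that answer (my own sketch), computing the centroid height directly by integration instead of via Pappus’ theorem:

```python
import math

# Centroid height of the unit semicircle 0 <= y <= sqrt(1 - x^2):
# ybar = (1/A) * integral of (1/2)(1 - x^2) dx over [-1, 1],
# which should come out to 4 / (3 pi), matching the Pappus argument.
n = 100_000
h = 2.0 / n
moment = sum(0.5 * (1 - (-1 + (i + 0.5) * h) ** 2) for i in range(n)) * h
area = math.pi / 2
ybar = moment / area

print(abs(ybar - 4 / (3 * math.pi)) < 1e-8)  # → True
```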