Area of a triangle and its projections

Let S be the area of a triangle T in three-dimensional space. Let A, B, and C be the areas of the projections of T onto the xy, yz, and xz planes respectively. Then

S² = A² + B² + C².

There’s an elegant proof of this theorem here using differential forms. Below I’ll sketch a less elegant but more elementary proof.

You could prove the identity above by using the fact that the area of a triangle spanned by two vectors is half the length of their cross product. Suppose ab, and c are the locations of the three corners of T. Then

S² = v²/4,

where

v = (a – b) × (c – b)

and by v² we mean the dot product of v with itself.

Write out the dot product v² and you get the sum of three squared terms, one for each component of v. Notice that when you set the x components of a, b, and c to zero, i.e. project onto the yz plane, the first component of v is unchanged and the other two become zero. Applying the cross product formula to the projected triangle shows that the first of the three terms of v² is four times the squared area of the yz projection. Similar arguments apply to the other two terms, so v² = 4(A² + B² + C²), and dividing by 4 gives the identity.
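
If you want to check the identity numerically, here's a quick sketch using NumPy. The random vertices and the helper functions are just for illustration.

        import numpy as np

        def tri_area(p, q, r):
            # Area of the triangle with vertices p, q, r: half the length of a cross product
            return np.linalg.norm(np.cross(p - q, r - q)) / 2

        def proj_area(p, q, r, i):
            # Area of the projection obtained by zeroing out coordinate i
            p, q, r = p.copy(), q.copy(), r.copy()
            p[i] = q[i] = r[i] = 0
            return tri_area(p, q, r)

        rng = np.random.default_rng(0)
        a, b, c = rng.standard_normal((3, 3))   # random triangle in 3D

        S = tri_area(a, b, c)
        A = proj_area(a, b, c, 2)   # xy plane (drop z)
        B = proj_area(a, b, c, 0)   # yz plane (drop x)
        C = proj_area(a, b, c, 1)   # xz plane (drop y)

        print(S**2, A**2 + B**2 + C**2)   # the two values agree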

 

Amistics

Neal Stephenson coins a useful word Amistics in his novel Seveneves:

… it was a question of Amistics, which was a term that had been coined ages ago by a Moiran anthropologist to talk about the choices that different cultures made as to which technologies they would, and would not, make part of their lives. The word went all the way back to the Amish people … All cultures did this, frequently without being consciously aware that they had made collective choices.

Related post by Kevin Kelly: Amish Hackers

How many ways can you tile a chessboard with dominoes?

Suppose you have an n by m chessboard. How many ways can you cover the chessboard with dominoes?

It turns out there’s a remarkable closed-form solution:

 

\sqrt{\prod_{k=1}^m \prod_{\ell=1}^n \left( 2\cos\left(\frac{\pi k}{m+1} \right) + 2i \cos\left(\frac{\pi \ell}{n+1} \right) \right)}

 

Here are some questions you may have.

But what if n and m are both odd? You can’t tile such a board with dominoes.

Yes, in that case the formula evaluates to zero.

Do you need an absolute value somewhere? Or a floor or ceiling?

No. It looks like the double product could be a general complex number, but it’s real. In fact, it’s always the square of an integer.

Update: Apparently the formula does need an absolute value, not to turn complex values into real values but to turn negative integers into positive ones. See Aaron Meurer’s example below.

Does it work numerically?

Apparently so. If you evaluated the product in a package that could symbolically manipulate the cosines, the result would be exact. In floating point it cannot be, but at least in my experiments the result is correct when rounded to the nearest integer. For example, there are 12,988,816 ways to tile a standard 8 by 8 chessboard with dominoes, and the following Python script returns 12988816.0. For sufficiently large arguments the result will not always round to the correct answer, but for moderate-sized arguments it should.

        
        from numpy import pi, cos, sqrt

        def num_tilings(m, n):
            # Double product over k = 1..m and l = 1..n from the closed-form formula
            prod = 1
            for k in range(1, m+1):
                for l in range(1, n+1):
                    prod *= 2*cos(pi*k/(m+1)) + 2j*cos(pi*l/(n+1))
            # In exact arithmetic the product is the square of an integer;
            # abs() discards the tiny imaginary part left over by floating point
            return sqrt(abs(prod))

        print(num_tilings(8, 8))

The code looks wrong. Shouldn’t the ranges go up to m and n?

No: Python ranges are half-open intervals, so range(a, b) goes from a up to b-1, and range(1, m+1) covers 1 through m, exactly the range the formula calls for. That convention looks unnecessarily complicated, but it makes some things easier.

You said that there was no need for absolute values, but your code has one.

Yes, because while in theory the imaginary part will be exactly zero, in floating point arithmetic the imaginary part might be small but not zero.

Where did you find this formula?

Thirty-three Miniatures: Mathematical and Algorithmic Applications of Linear Algebra

Acoustic roughness examples

Amplitude modulated signals sound rough to the human ear. The perceived roughness increases with modulation frequency, then decreases, and eventually disappears. The point where roughness reaches its maximum depends on the carrier signal, but for a 1 kHz tone roughness is greatest when the modulation frequency is around 70 Hz. Roughness also increases as a function of modulation depth.

Amplitude modulation multiplies a carrier signal by

1 + d sin(2π f t)

where d is the modulation depth, f is the modulation frequency, and t is time.

Here are some examples you can listen to. We use a pure 1000 Hz tone and Gaussian white noise as carriers, and vary modulation depth and frequency continuously over 10 seconds. The modulation depth examples vary the depth from 0 to 1. The modulation frequency examples vary the frequency from 0 to 120 Hz.
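
Here's a rough sketch of how the first example below could be generated, assuming NumPy and SciPy are available. The sample rate, output file name, and the 70 Hz modulation frequency are my choices for illustration, not necessarily what was used for the clips.

        import numpy as np
        from scipy.io import wavfile

        fs = 44100                          # sample rate in Hz (arbitrary choice)
        t = np.arange(0, 10, 1/fs)          # 10 seconds of samples

        carrier = np.sin(2*np.pi*1000*t)    # 1 kHz pure tone
        depth = t/10                        # modulation depth ramps from 0 to 1
        signal = (1 + depth*np.sin(2*np.pi*70*t)) * carrier   # 70 Hz amplitude modulation

        signal /= np.max(np.abs(signal))    # normalize to avoid clipping
        wavfile.write("am_depth_sweep.wav", fs, (32767*signal).astype(np.int16))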

First, here’s a pure tone with increasing modulation depth.

 

Next we vary the modulation frequency.

 

Now we switch over to Gaussian white noise, first varying depth.

 

And finally white noise with varying modulation frequency. This one sounds like a prop-driven airplane taking off.

 

Related: Psychoacoustics consulting

Tensors 5: Scalars

There are two uses of the word scalar, one from linear algebra and another from tensor calculus.

In linear algebra, vector spaces have a field of scalars. This is where the coefficients in linear combinations come from. For real vector spaces, the scalars are real numbers. For complex vector spaces, the scalars are complex numbers. For vector spaces over any field K, the elements of K are called scalars.

But there is a more restrictive use of scalar in tensor calculus. There a scalar is not just a number, but a number whose value does not depend on one’s choice of coordinates. For example, the temperature at some location is a scalar, but the first coordinate of a location depends on your choice of coordinate system. Temperature is a scalar, but x-coordinate is not. Scalars are numbers, but not all numbers are scalars.
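
Here's a tiny numerical illustration of that distinction; the temperature field and the rotation below are made up for the sketch.

        import numpy as np

        def temperature(p):
            # A scalar field: a number attached to each physical point,
            # written here in terms of the original coordinates
            return 20 + p[0]**2 + 3*p[1] - p[2]

        theta = 0.7
        R = np.array([[np.cos(theta), -np.sin(theta), 0],
                      [np.sin(theta),  np.cos(theta), 0],
                      [0, 0, 1]])          # a rotation: one choice of new coordinates

        def temperature_new(q):
            # The same field expressed in the rotated coordinates
            return temperature(R.T @ q)

        p = np.array([1.0, 2.0, 3.0])      # a point in the original coordinates
        p_new = R @ p                      # the same point in the new coordinates

        print(temperature(p), temperature_new(p_new))  # same number: a scalar
        print(p[0], p_new[0])                          # different numbers: not a scalar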

The linear algebraic use of scalar is more common among mathematicians, the coordinate-invariant use among physicists. The two uses of scalar are a special case of the two uses of tensor described in the previous post. Linear algebra thinks of tensors simply as things that take in vectors and return numbers. The physics/tensor analysis view of tensors includes behavior under changes of coordinates. You can think of a scalar as a 0th order tensor, one that behaves as simply as possible under a change of coordinates, i.e. doesn’t change at all.

Tensors 4: Behavior under change of coordinates

In the first post in this series I mentioned several apparently unrelated things that are all called tensors, one of these being objects that behave a certain way under changes of coordinates. That’s what we’ll look at this time.

In the second post we said that a tensor is a multilinear functional. A k-tensor takes k vectors and returns a number, and it is linear in each argument if you hold the rest constant. We mentioned that this relates to the “box of numbers” idea of a tensor. You can describe how a k-tensor acts by writing out k nested sums. The terms in these sums are called the components of the tensor.

Tensors are usually defined in a way that has more structure. They vary from point to point in a space, and they do so in a way that in some sense is independent of the coordinates used to label these points. At each point you have a tensor in the sense of a multilinear functional, but the emphasis is usually on the changes of coordinates.

Components, indexes, and coordinates

Tensors in the sense that we’re talking about here come in two flavors: covariant and contravariant. They also come in mixtures; more on that later.

We consider two coordinate systems, one denoted by x's and another by x's with bars on top. The components of a tensor in the x-bar coordinate system will also have bars on top. For a covariant tensor of order one, the components satisfy

\bar{T}_i =T_r \frac{\partial x^r}{\partial \bar{x}^i}

First of all, coordinates are written with superscripts, so x^r is the rth coordinate, not x raised to the power r. Also, this uses Einstein summation notation: there is an implicit sum over repeated indexes, in this case over r.

The components of a contravariant tensor of order one satisfy a similar but different equation:

\bar{T}^i =T^r \frac{\partial \bar{x}^i}{\partial x^r}

The components of a covariant tensor are written with subscripts, and the components of a contravariant tensor with superscripts. In the equation for covariant components, the partial derivatives are with respect to the new coordinates, the x bars. In the equation for contravariant components, the partial derivatives are with respect to the original coordinates, the x's. Mnemonic: when the indexes go down (covariant tensors) the new coordinates go down (in the partial derivatives). When the indexes go up, the new coordinates go up.

For covariant tensors of order two, the change of coordinate formula is

\bar{T}_{ij} = T_{rs} \frac{\partial x^r}{\partial\bar{x}^i} \frac{\partial x^s}{\partial \bar{x}^j}

Here the summation convention says that there are two implicit sums, one over r and one over s.

The contravariant counterpart says

 \bar{T}^{ij} = T^{rs} \frac{\partial\bar{x}^i}{\partial x^r} \frac{\partial\bar{x}^j}{\partial x^s}

In general you could have tensors that are a mixture of covariant and contravariant. A tensor with covariant order p and contravariant order q has p subscripts and q superscripts. The partial derivatives have x-bars on bottom corresponding to the covariant components and x-bars on top corresponding to contravariant components.
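
To see the two first-order transformation rules in action, here's a small numerical sketch of my own, assuming a linear change of coordinates in which the new coordinates are M times the old ones, so the two Jacobian matrices are just M and its inverse.

        import numpy as np

        rng = np.random.default_rng(1)
        M = rng.standard_normal((3, 3))   # Jacobian of new coordinates w.r.t. old ones
        M_inv = np.linalg.inv(M)          # Jacobian of old coordinates w.r.t. new ones

        T_lower = rng.standard_normal(3)  # components of a covariant tensor of order one
        T_upper = rng.standard_normal(3)  # components of a contravariant tensor of order one

        T_lower_bar = T_lower @ M_inv     # indexes down: contract with the inverse Jacobian
        T_upper_bar = M @ T_upper         # indexes up: contract with the Jacobian

        # Pairing a covariant tensor with a contravariant one gives the same number
        # in either coordinate system.
        print(T_lower @ T_upper, T_lower_bar @ T_upper_bar)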

Relation to multilinear functionals

We initially said a tensor was a multilinear functional. A tensor of order k takes k vectors and returns a number. Now we’d like to refine that definition to take two kinds of vectors. A tensor with covariant order p and contravariant order q takes p contravariant vectors and q covariant vectors. In linear algebra terms, instead of simply taking k elements of a vector space V, our tensor takes p vectors from V and q vectors from the dual space V*.

Relation to category theory

You may be familiar with the terms covariant and contravariant from category theory, or its application to object oriented programming. The terms are related. As Michael Spivak explains, “It’s very easy to remember which kind of vector field is covariant, and which is contravariant — it’s just the opposite of what it logically ought to be [from category theory].”

Tensors 3: Tensor products

In the previous post, we defined the tensor product of two tensors, but you’ll often see tensor products of spaces. How are these tensor products defined?

Tensor product splines

For example, you may have seen tensor product splines. Suppose you have a function over a rectangle that you’d like to approximate by patching together polynomials so that the interpolation has the specified values at grid points, and the patches fit together smoothly. In one dimension, you do this by constructing splines. Then you can bootstrap your way from one dimension to two dimensions by using tensor product splines. A tensor product spline in x and y is a sum of terms consisting of a spline in x and a spline in y. Notice that a tensor product spline is not simply a product of two ordinary splines, but a sum of such products.

If X is the vector space of all splines in the x-direction and Y the space of all splines in the y-direction, the space of tensor product splines is the tensor product of the spaces X and Y. Suppose the splines si, for i running from 1 to n, form a basis for X. Similarly, suppose the splines tj, for j running from 1 to m, form a basis for Y. Then the products si tj form a basis for the tensor product of X and Y, the tensor product splines over the rectangle. Notice that if X has dimension n and Y has dimension m then their tensor product has dimension nm. Notice also that if we only allowed products of splines, not sums of products of splines, we’d get a much smaller space, one of dimension n + m.

Tensor products of vector spaces

We can use the same process to define the tensor product of any two vector spaces. A basis for the tensor product is the set of all products of a basis element from one space with a basis element from the other. There’s a more general definition of tensor products, one that doesn’t involve bases, sketched below.
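
Here's a small sketch of the same idea in coordinates, using NumPy, with u and v standing for coefficient vectors relative to the chosen bases; the particular numbers are arbitrary.

        import numpy as np

        # Coefficients of elements of X (dimension 3) and Y (dimension 4)
        u1 = np.array([1.0, 0.0, 2.0])
        u2 = np.array([0.0, 1.0, 1.0])
        v1 = np.array([1.0, 2.0, 0.0, 0.0])
        v2 = np.array([3.0, 0.0, 0.0, 1.0])

        # The tensor product of two elements has coefficients given by an outer product
        print(np.outer(u1, v1))

        # A general element of the tensor product space is a sum of such products.
        # Its coefficients fill a 3-by-4 array, so the dimension is nm = 12, not n + m = 7.
        w = np.outer(u1, v1) + np.outer(u2, v2)
        print(w.shape)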

Tensor products of modules

You can also define tensor products of modules, a generalization of vector spaces. You could think of a module as a vector space where the scalars come from a ring instead of a field. Since rings are more general than fields, modules are more general than vector spaces.

The tensor product of two modules over a commutative ring is defined by taking the Cartesian product and modding out by the relations necessary to make things bilinear. (This description is very hand-wavy. A detailed presentation needs its own blog post or two.)

Tensor products of modules hold some surprises. For example, let m and n be two relatively prime integers. You can think of the integers mod m or mod n as a module over the integers. The tensor product of these modules is zero because you end up modding out by everything: since m and n are relatively prime, you can write 1 = am + bn for some integers a and b, so every elementary tensor satisfies x ⊗ y = am(x ⊗ y) + bn(x ⊗ y) = (amx) ⊗ y + x ⊗ (bny) = 0 ⊗ y + x ⊗ 0 = 0. This kind of collapse doesn’t happen over vector spaces.

Past and future

The first two posts in this series:

  • Tensors 1: What is a tensor?
  • Tensors 2: Multilinear operators

I plan to leave the algebraic perspective aside for a while, though as I mentioned above there’s more to come back to.

Next I plan to write about the analytic/geometric view of tensors. Here we get into things like changes of coordinates and it looks at first as if a tensor is something completely different.

Update: Tensors 4: Behavior under changes of coordinates

Tensors 2: Multilinear operators

The simplest definition of a tensor is that it is a multilinear functional, i.e. a function that takes several vectors, returns a number, and is linear in each argument. Tensors over real vector spaces return real numbers, tensors over complex vector spaces return complex numbers, and you could work over other fields if you’d like.

A dot product is an example of a tensor. It takes two vectors and returns a number. And it’s linear in each argument. Suppose you have vectors u, v, and w, and a real number a. Then the dot product (u + v, w) equals (u, w) + (v, w), and (au, w) = a(u, w). This shows that the dot product is linear in its first argument, and you can show similarly that it is linear in the second argument.

Determinants are also tensors. You can think of the determinant of an n by n matrix as a function of its n rows (or columns). This function is linear in each argument, so it is a tensor.

The introduction to this series mentioned the interpretation of tensors as a box of numbers: a matrix, a cube, etc. This is consistent with our definition because you can write a multilinear functional as a sum. For every vector that a tensor takes in, there is an index to sum over. A tensor taking n vectors as arguments can be written as n nested summations. You could think of the coefficients of this sum being spread out in space, each index corresponding to a dimension.

Tensor products are simple in this context as well. If you have a tensor S that takes m vectors at a time, and another tensor T that takes n vectors at a time, you can create a tensor that takes m + n vectors by sending the first m of them to S and the rest to T, then multiplying the results. That’s the tensor product of S and T.
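
Here's a small sketch of these definitions in code, using NumPy; the function names are mine, and the dot product and determinant play the roles of S and T.

        import numpy as np

        def tensor_product(S, m, T, n):
            # Send the first m vectors to S, the remaining n to T, and multiply
            return lambda *vectors: S(*vectors[:m]) * T(*vectors[m:])

        dot = lambda u, v: np.dot(u, v)                    # a 2-tensor
        det = lambda *rows: np.linalg.det(np.array(rows))  # a 3-tensor on 3-vectors

        ST = tensor_product(dot, 2, det, 3)                # a 5-tensor

        # Check linearity in the first argument
        u, v, w, x, y, z = np.random.default_rng(2).standard_normal((6, 3))
        a = 1.7
        lhs = ST(a*u + v, w, x, y, z)
        rhs = a*ST(u, w, x, y, z) + ST(v, w, x, y, z)
        print(np.isclose(lhs, rhs))   # True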

The discussion of tensors and tensor products above still leaves a lot of questions unanswered. We haven’t considered the most general definition of tensor or tensor product. And we haven’t said anything about how tensors arise in applications, or what they have to do with geometry or changes of coordinates. I plan to address these issues in future posts. I also plan to write about other things in between posts on tensors.

Next post in series: Tensor products

 

 

Tensors 1: What is a tensor?

Riemann tensor $R^\alpha_{\beta\gamma\delta}$

The word “tensor” is shrouded in mystery. The same term is applied to many different things that don’t appear to have much in common with each other.

You might have heard that a tensor is a box of numbers. Just as a matrix is a rectangular collection of numbers, a tensor could be a cube of numbers or even some higher-dimensional set of numbers.

You might also have heard that a tensor is something that has upper and lower indices, such as the Riemann tensor above, things that have arcane manipulation rules such as “Einstein summation.”

Or you might have heard that a tensor is something that transforms like a tensor under changes of coordinates. A tensor is as a tensor does. Something that behaves the right way under certain changes of variables is a tensor.

Tensor product $A \otimes B$

And then there’s things that aren’t called tensors, but they have tensor products. These seem simple enough in some cases—you think “I didn’t realize that has a name. So it’s called a tensor product. Good to know.” But then in other cases tensor products seem more elusive. If you look in an advanced algebra book hoping for a simple definition of a tensor product, you might be disappointed and feel like the book is being evasive or even poetic because it describes what a tensor product does rather than what it is. That is, the definition is behavioral rather than constructive.

What do all these different uses of the word “tensor” have to do with each other? Do they have anything to do with the TensorFlow machine learning library that Google released recently? That’s something I’d like to explore over a series of blog posts.

Next posts in the series:

  • Tensors 2: Multilinear operators
  • Tensors 3: Tensor products
  • Tensors 4: Behavior under change of coordinates
  • Tensors 5: Scalars

From triangles to the heat equation

“Mathematics compares the most diverse phenomena and discovers the secret analogies that unite them.” — Joseph Fourier

The above quote makes me think of a connection Fourier made between triangles and thermodynamics.

Trigonometric functions were first studied because they relate angles in a right triangle to ratios of the lengths of the triangle’s sides. For the most basic applications of trigonometry, it only makes sense to consider positive angles smaller than a right angle. Then somewhere along the way someone discovered that it’s convenient to define trig functions for any angle.

Once you define trig functions for any angle, you begin to think of these functions as being associated with circles rather than triangles. More advanced math books refer to trig functions as circular functions. The triangles fade into the background. They’re still there, but they’re drawn inside a circle. (Hyperbolic functions are associated with hyperbolas the same way circular functions are associated with circles.)

Now we have functions that historically arose from studying triangles, but they’re defined on the whole real line. And we ask the kinds of questions about them that we ask about other functions. How fast do they change from point to point? How fast does their rate of change change? And here we find something remarkable. The rate of change of a sine function is proportional to a cosine function and vice versa. And if we look at the rate of change of the rate of change (the second derivative or acceleration), sine functions yield more sine functions and cosine functions yield more cosine functions. In more sophisticated language, sines and cosines are eigenfunctions of the second derivative operator.

Here’s where thermodynamics comes in. You can use basic physics to derive an equation for describing how heat in some object varies over time and location. This equation is called, surprisingly enough, the heat equation. It relates second derivatives of heat in space with first derivatives in time.

Fourier noticed that the heat equation would be easy to solve if only he could work with functions that behave very nicely with regard to second derivatives, i.e. sines and cosines! If only everything were sines and cosines. For example, the temperature in a thin rod over time is easy to determine if the initial temperature distribution is given by a sine wave. Interesting, but not practical.
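
For example, on a rod of length L whose ends are held at temperature zero, a sine initial distribution doesn’t change shape at all; it simply decays in place. With k denoting the diffusivity in the heat equation, the standard separation-of-variables solution is

u(x, t) = \sin\left(\frac{n\pi x}{L}\right) \exp\left(-k\left(\frac{n\pi}{L}\right)^2 t\right)

Differentiating twice in space or once in time multiplies the sine by the same factor, apart from the diffusivity k, which is exactly the balance the heat equation requires.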

However, the initial distribution doesn’t have to be a sine, or a cosine. We can still solve the heat equation if the initial distribution is a sum of sines. And if the initial distribution is approximately a sum of sines and cosines, then we can compute an approximate solution to the heat equation. So what functions are approximately a sum of sines and cosines? All of them!

Well, not quite all functions. But lots of functions. More functions than people originally thought. Pinning down exactly what functions can be approximated arbitrarily well by sums of sines and cosines (i.e. which functions have convergent Fourier series) was a major focus of 19th century mathematics.

So if someone asks what use they’ll ever have for trig identities, tell them they’re important if you want to solve the heat equation. That’s where I first used some of these trig identities often enough to remember them, and that’s a fairly common experience for people in math and engineering. Solving the heat equation reviews everything you learn in trigonometry, even though there are not necessarily any triangles or circles in sight.

Contradictory news regarding ABC conjecture

“Research is what I’m doing when I don’t know what I’m doing.” — Wernher von Braun

I find Shinichi Mochizuki’s proof of the abc conjecture fascinating. Not the content of the proof—which I do not understand in the least—but the circumstances of the proof. Most mathematics, no matter how arcane it appears to outsiders, is not that original. Experts in a specific area can judge just how much or how little a new paper adds to their body of knowledge, and usually it’s not much. But Mochizuki’s work is such a departure from the mainstream that experts have been trying to understand it for four years now.

Five days ago, Nature published an article headlined Monumental Proof to Torment Mathematicians for Years to Come.

… Kedlaya says that the more he delves into the proof, the longer he thinks it will take to reach a consensus on whether it is correct. He used to think that the issue would be resolved perhaps by 2017. “Now I’m thinking at least three years from now.”

Others are even less optimistic. “The constructions are generally clear, and many of the arguments could be followed to some extent, but the overarching strategy remains totally elusive for me,” says mathematician Vesselin Dimitrov of Yale University in New Haven, Connecticut. “Add to this the heavy, unprecedentedly indigestible notation: these papers are unlike anything that has ever appeared in the mathematical literature.”

But today, New Scientist has an article with the headline Mathematicians finally starting to understand epic ABC proof. According to this article,

At least 10 people now understand the theory in detail, says Fesenko, and the IUT papers have almost passed peer review so should be officially published in a journal in the next year or so.

It’s interesting that the proof is so poorly understood that experts disagree on just how well it is understood.

Related posts:

Practical continuity

I had a discussion recently about whether things are really continuous in the real world. Strictly speaking, maybe not, but practically yes. The same is true of all mathematical properties. There are no circles in the real world, not in the Platonic sense of a mathematical circle. But a circle is a very useful abstraction, and plenty of things are circles for practical purposes. In this post I’ll explain the typical definition of continuity and a couple of modifications for application.

A function f is continuous if nearby points go to nearby points. A discontinuity occurs when some points are close together but their images are far apart, such as when you have a threshold effect. For example, suppose you pay $3 to ship packages that weigh less than a pound and $5 to ship packages that weigh a pound or more. Two packages can weigh almost the same, one a little less than a pound and one a little more, but not cost almost the same to ship. The difference in their shipping cost is $2, no matter how close together they are, as long as they’re on opposite sides of the one-pound threshold.

A practical notion of continuity has some idea of resolution. Suppose in our example that packages below one pound ship for $3.00 and packages that weigh a pound or more ship for $3.05. You might say “I don’t care about differences of a nickel.” And so at that resolution, the shipping costs are continuous.

The key to understanding continuity is being precise about the two uses of “nearby” in the statement that a continuous function sends nearby points to nearby points. What do you mean by nearby? How close is close enough? In the pure mathematical definition of continuity, the answer is “as close as you want.” You specify any tolerance you like, no matter how small, and call it ε. If for any ε someone picks, it’s always possible to specify a small enough neighborhood around x that those points are mapped within ε of f(x), then f is continuous at x.

For applications, we modify this definition by putting a lower bound on ε. A function is continuous at x, for the purposes of a particular application, if for every ε larger than the resolution of the problem, you can find a neighborhood of x so small that all the points in that neighborhood are mapped within ε of f(x). In our shipping example, if you only care about differences in rates larger than $1, then if the rates change by $0.05 at the one-pound threshold, the rates are continuous as far as you’re concerned. But if the rates jump by $2 at one pound, then the rates are discontinuous for your purposes.

When you see a smooth curve on a high-resolution screen, it’s continuous as far as your vision is concerned. Nearby points on the curve go to points that are nearby as far as the resolution of your vision is concerned, even though strictly speaking the curve could have jump discontinuities at every pixel.

If you take a function that is continuous in the pure mathematical sense, then any multiple of that function is also continuous. If you make the function 10x larger, you just need to get closer to each point x to find a neighborhood so that all points get within ε of f(x). But in practical application, a multiple of a continuous function might not be continuous. If your resolution on shipping rates is $1, and the difference in cost between shipping a 15 ounce and a 17 ounce package is $0.05, then it’s continuous for you. But if the rates were suddenly 100x greater, now you care, because now the difference in cost is $5.

With the example of the curve on a monitor, if you were to zoom in on the image, at some point you’d see the individual pixels and so the image would no longer be continuous as far as your vision is concerned.

We’ve been focusing on what nearness means for the output, ε. Now let’s focus on nearness for the input. Introducing a restriction on ε lets us say some functions are continuous for a particular application that are not continuous in the pure mathematical sense. We can also introduce a restriction on the resolution of the input, call it δ, so that the opposite is true: some functions are continuous in the pure mathematical sense that are not continuous for a particular application.

The pure mathematical definition of continuity of f at x is that for every ε > 0, there exists a δ > 0 such that if |x – y| < δ, then |f(x) – f(y)| < ε. But how small does δ have to be? Maybe too small for application. Maybe points would have to be closer together than they can actually be in practice. If a function changes rapidly, but smoothly, then it’s continuous in the pure mathematical sense, but it may be discontinuous for practical purposes. The more rapidly the function changes, the smaller δ has to be for points within δ of x to end up within ε of f(x).

So an applied definition of continuity would look something like this.

A function f is continuous at x, for the purposes of a particular application, if for every ε > the resolution of the problem, there exists a δ > the lower limit for that application, such that if |x – y| < δ, then |f(x) – f(y)| < ε.
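
Here's the shipping example cast in that form, as a Python sketch; the helper function and the grid of sampled points are my own simplification of “there exists a δ.”

        def shipping_cost(pounds, jump=0.05):
            # Hypothetical rates: $3.00 under a pound, $3.00 + jump at a pound or more
            return 3.00 if pounds < 1 else 3.00 + jump

        def practically_continuous_at(f, x, eps, delta):
            # Applied definition: every sampled point within delta of x maps within eps of f(x)
            nearby = [x + i*delta/100 for i in range(-100, 101)]
            return all(abs(f(y) - f(x)) < eps for y in nearby)

        # With a $1 resolution, a nickel jump at one pound doesn't register as a discontinuity...
        print(practically_continuous_at(shipping_cost, 1.0, eps=1.00, delta=0.01))   # True

        # ...but a $2 jump does.
        print(practically_continuous_at(lambda w: shipping_cost(w, jump=2.00),
                                        1.0, eps=1.00, delta=0.01))                  # False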

Yet another way to define fractional derivatives

Fractional integrals are easier to define than fractional derivatives. And for sufficiently smooth functions, you can use the former to define the latter.

The Riemann-Liouville fractional integral starts from the observation that for positive integer n,

I^n f(x) \equiv \int_a^{x} \int_a^{x_1} \cdots \int_a^{x_{n-1}} f(x_n)\,dx_n\, dx_{n-1} \cdots \,dx_1 = \frac{1}{(n-1)!} \int_a^x (x-t)^{n-1} f(t)\, dt

This motivates a definition of fractional integrals

I^\alpha f(x) = \frac{1}{\Gamma(\alpha)} \int_a^x (x-t)^{\alpha-1} f(t)\, dt

which is valid for any complex α with positive real part. Derivatives and integrals are inverses for integer degree, and we use this to define fractional derivatives: the derivative of degree n is the integral of degree –n. So if we could define fractional integrals for any degree, we could define a derivative of degree α to be an integral of degree -α.

Unfortunately we can’t do this directly since our definition only converges in the right half-plane. But for (ordinary) differentiable f, we can integrate the Riemann-Liouville definition of fractional integral by parts:

I^\alpha f(x) = \frac{(x-a)^\alpha}{\Gamma(\alpha+1)} f(a) + I^{\alpha+1} f'(x)

We can use the right side of this equation to define the left side when the real part of α is bigger than -1. And if f has two ordinary derivatives, we can repeat this process to define fractional integrals for α with real part bigger than -2. We can repeat this process to define the fractional integrals (and hence fractional derivatives) for any degree we like, provided the function has enough ordinary derivatives.
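
As a quick numerical sanity check of the Riemann-Liouville integral above, here's a sketch assuming SciPy. The test function f(t) = t and the order α = 3/2 are arbitrary choices, and the exact value x^(α+1)/Γ(α+2) comes from evaluating the integral in closed form.

        from scipy.integrate import quad
        from scipy.special import gamma

        def frac_integral(f, alpha, x, a=0.0):
            # Riemann-Liouville fractional integral of order alpha over [a, x]
            integrand = lambda t: (x - t)**(alpha - 1) * f(t)
            value, _ = quad(integrand, a, x)
            return value / gamma(alpha)

        alpha, x = 1.5, 2.0
        numeric = frac_integral(lambda t: t, alpha, x)
        exact = x**(alpha + 1) / gamma(alpha + 2)   # closed form for f(t) = t, a = 0
        print(numeric, exact)                       # the two values agree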

See previous posts for two other ways of defining fractional derivatives, via Fourier transforms and via the binomial theorem.

Quantifying how annoying a sound is

Eberhard Zwicker proposed a model for combining several psychoacoustic metrics into one metric to quantify annoyance. It is a function of three things:

  • N5, the 95th percentile of loudness, measured in sone (which is confusingly called the 5th percentile)
  • ωS, a function of sharpness (in acum) and of loudness
  • ωFR, a function of fluctuation strength (in vacil), roughness (in asper), and loudness

Specifically, Zwicker calculates PA, psychoacoustic annoyance, by

PA = N_5 \left( 1 + \sqrt{\omega_S^2 + \omega_{FR}^2}\right)

\omega_S = \left(\frac{S}{\mbox{acum}} - 1.75\right)^+ \log \left(\frac{N_5}{\mbox{sone}} + 10\right)

\omega_{FR} = \frac{2.18}{(N_5/\mbox{sone})^{0.4}} \left( 0.4 \frac{F}{\mbox{vacil}} + 0.6 \frac{R}{\mbox{asper}}\right)

A geometric visualization of the formula is given below.

Geometric representation of Zwicker's annoyance formula

Here’s an example of computing annoyance using two sound files from previous posts, a leaf blower and a simulated kettledrum. I calibrated both to have sound pressure level 80 dB. But because of the different composition of the sounds, i.e. more high frequency components in the leaf blower, the leaf blower is much louder than the kettledrum (39 sone vs 15 sone) at the same sound pressure level. The annoyance of the leaf blower works out to about 56, while the kettledrum’s was only about 19.
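
Here's the formula transcribed into Python, as a sketch only: the sharpness, fluctuation strength, and roughness inputs below are hypothetical placeholders rather than measurements from the clips above, and I'm assuming the log is base 10 and that ( )^+ denotes the positive part.

        from math import log10, sqrt

        def psychoacoustic_annoyance(N5, S, F, R):
            # N5 in sone, S in acum, F in vacil, R in asper
            omega_S = max(S - 1.75, 0) * log10(N5 + 10)   # sharpness term; ( )^+ is the positive part
            omega_FR = 2.18 / N5**0.4 * (0.4*F + 0.6*R)   # fluctuation strength and roughness term
            return N5 * (1 + sqrt(omega_S**2 + omega_FR**2))

        # Hypothetical inputs, just to show the calculation
        print(psychoacoustic_annoyance(N5=39, S=2.0, F=0.1, R=0.3))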