Running the Gregorian calendar backwards

Toward the end of last year I wrote several blog posts about calendars. The blog post about the Gregorian calendar began with this paragraph.

The time it takes for the earth to orbit the sun is not an integer multiple of the time it takes for the earth to rotate on its axis, nor is it a rational number with a small denominator. Why should it be? Much of the complexity of our calendar can be explained by rational approximations to an irrational number.

The post went on to say why the Gregorian calendar was designed as it was. In a nutshell, the average length of a year in the Gregorian calendar is 365 97/400 days, which matches the astronomical year much better than the Julian calendar, which has an average year length of 365 ¼ days.

In the Julian calendar, every year divisible by 4 is a leap year. The Gregorian calendar makes an exception: centuries are not leap years unless they are divisible by 400. So the year 2000 was a leap year, but 1700, 1800, and 1900 were not. Instead of having 100 leap years every 400 years, the Gregorian calendar has 97 leap years every 400 years.
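
If you want this rule in code, it’s a one-liner. Here’s a Python sketch (the function name is mine):

    def is_gregorian_leap_year(year):
        # Divisible by 4, except that centuries must also be divisible by 400.
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    # 2000 was a leap year; 1700, 1800, and 1900 were not.
    print([y for y in (1700, 1800, 1900, 2000) if is_gregorian_leap_year(y)])  # [2000]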

Why does it matter whether the calendar year matches the astronomical year? In the short run it makes little difference, but in the long run it matters more. Under the Julian calendar, the Spring equinox occurred around March 21. If the world had remained on the Julian calendar, the date of the Spring equinox would drift later and later, moving into the summer and eventually moving through the entire year. Plants that bloom in March would eventually bloom in what we’d call July. And instead of the dog days of summer being in July, eventually they’d be in what we’d call November.

The Gregorian calendar reform sought to do two things: stop the drift of the seasons and restore the Spring equinox to March 21. The former could have been accomplished with little disruption by simply using the Gregorian calendar moving forward. The latter was more disruptive since it required removing days from the calendar.

The Julian year was too long, gaining 3 days every 400 years. So between the time of Jesus and the time of Pope Gregory, the calendar had drifted by about 12 days. In order to correct for this, the calendar would have to jump forward about a dozen days. If you think moving clocks forward an hour is disruptive, imagine moving the calendar forward a dozen days.

The Gregorian calendar didn’t remove 12 days; it removed 10. In the first countries to adopt the new calendar, Thursday, October 4, 1582 was followed by Friday, October 15. Note that Thursday was followed by Friday as usual. The seven-day cycle of days of the week was not disrupted. That’ll be important later on.

Why did the Gregorian calendar remove 10 days and not 12?

We can think of the 10 days that were removed as corresponding to previous years that the Julian calendar considered a leap year but that the Gregorian calendar would not have: 1500, 1400, 1300, 1100, 1000, 900, 700, 600, 500, and 300. Removing 10 days put the calendar in sync astronomically with the 300’s. This is significant because the Council of Nicaea met in 325 and made decisions regarding the calendar. Removing 10 days in 1582 put the calendar in sync with the calendar at the time of the council.

Now let’s push the calendar back further. Most scholars say Jesus was crucified on Friday, April 3, 33 AD. What exactly does “April 3, 33” mean? Was that a Julian date or a Gregorian date? There’s a possible difference of two days, corresponding to whether or not the years 100 and 200 were considered leap years.

If we were to push the Gregorian calendar back to the first century, the calendar for April in 33 AD would be the same as the calendar in 2033 AD (five cycles of 400 years later). April 3, 2033 is on a Sunday. (You could look that up, or use the algorithm given here.) April 3, 33 in the Julian calendar corresponds to April 1, 33 in the Gregorian calendar. So April 3, 33 was a Friday in the Julian calendar, the calendar in use at the time.
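
If you’d rather check this with code, Python’s datetime module uses the proleptic Gregorian calendar, i.e. the Gregorian calendar extended backward before 1582, so a few lines will do. (This is just a check, not the algorithm referenced above.)

    from datetime import date

    days = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"]

    print(days[date(2033, 4, 3).weekday()])  # Sunday
    print(days[date(33, 4, 1).weekday()])    # Friday: Gregorian April 1, 33 = Julian April 3, 33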

Some scholars date the crucifixion as Friday, April 7, 30 AD. That would also be a Julian date.

Related posts

Fredholm Alternative

The Fredholm alternative is so called because it is a theorem by the Swedish mathematician Erik Ivar Fredholm that has two alternative conclusions: either this is true or that is true. This post will state a couple forms of the Fredholm alternative.

Mr. Fredholm was interested in the solutions to linear integral equations, but his results can be framed more generally as statements about solutions to linear equations.

This is the third in a series of posts, starting with a post on kernels and cokernels, followed by a post on the Fredholm index.

Fredholm alternative warmup

Given an m×n real matrix A and a column vector b, either

Ax = b

has a solution or

Aᵀy = 0 has a solution y with yᵀb ≠ 0.
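
Here’s a small numerical illustration of the two alternatives using NumPy and SciPy. The matrix, the right-hand sides, and the function name are all made up for the example.

    import numpy as np
    from scipy.linalg import lstsq, null_space

    def fredholm_alternative(A, b, tol=1e-10):
        # Try to solve Ax = b; if the residual is essentially zero, b is in the image of A.
        x = lstsq(A, b)[0]
        if np.linalg.norm(A @ x - b) < tol:
            return "Ax = b has a solution", x
        # Otherwise b has a component in the cokernel, the kernel of A^T.
        Y = null_space(A.T)          # basis for the kernel of A^T
        y = Y @ (Y.T @ b)            # component of b in the cokernel; y^T b != 0
        return "A^T y = 0 has a solution with y^T b != 0", y

    A = np.array([[1.0, 2.0],
                  [2.0, 4.0]])       # rank 1, so its image is a line
    print(fredholm_alternative(A, np.array([1.0, 2.0])))   # first alternative
    print(fredholm_alternative(A, np.array([1.0, 0.0])))   # second alternative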

This is essentially what I said in an earlier post on kernels and cokernels. From that post:

Suppose you have a linear transformation T: V → W and you want to solve the equation Tx = b. … If c is an element of W that is not in the image of T, then Tx = c has no solution, by definition. In order for Tx = b to have a solution, the vector b must not have any components in the subspace of W that is complementary to the image of T. This complementary space is the cokernel. The vector b must not have any component in the cokernel if Tx = b is to have a solution.

In this context you could say that the Fredholm alternative boils down to saying either b is in the image of A or it isn’t. If b isn’t in the image of A, then it has some component in the complement of the image of A, i.e. it has a component in the cokernel, the kernel of Aᵀ.

The Fredholm alternative

I’ve seen the Fredholm alternative stated several ways, and the following from [1] is the clearest. The “alternative” nature of the theorem is a corollary rather than being explicit in the theorem.

As stated above, Fredholm’s interest was in integral equations. These equations can be cast as operators on Hilbert spaces.

Let K be a compact linear operator on a Hilbert space H. Let I be the identity operator and A = I − K. Let A* denote the adjoint of A. Then the following hold.

  1. The null space of A is finite dimensional.
  2. The image of A is closed.
  3. The image of A is the orthogonal complement of the kernel of A*.
  4. The null space of A is 0 iff the image of A is H.
  5. The dimension of the kernel of A equals the dimension of the kernel of A*.

The last point says that the kernel and cokernel have the same dimension, and the first point says these dimensions are finite. In other words, the Fredholm index of A is 0.

Where is the “alternative” in this theorem?

The theorem says that there are two possibilities regarding the inhomogeneous equation

Ax = f.

One possibility is that the homogeneous equation

Ax = 0

has only the solution x = 0, in which case the inhomogeneous equation has a unique solution for all f in H.

The other possibility is that the homogeneous equation has non-zero solutions, and the inhomogeneous equation has a solution if and only if f is orthogonal to the kernel of A*, i.e. if f is orthogonal to the cokernel.

Freedom and constraint

We said in the post on kernels and cokernels that kernels represent degrees of freedom and cokernels represent constraints. We can add elements of the kernel to a solution and still have a solution. Requiring f to be orthogonal to the cokernel is a set of constraints.

If the kernel of A has dimension n, then the Fredholm alternative says the cokernel of A also has dimension n.

If solutions x to Ax = f have n degrees of freedom, then right-hand sides f must satisfy n constraints. Each degree of freedom for x corresponds to a basis element for the kernel of A. Each constraint on f corresponds to a basis element for the cokernel that f must be orthogonal to.

[1] Lawrence C. Evans. Partial Differential Equations, 2nd edition.

Fredholm index

The previous post on kernels and cokernels mentioned that for a linear operator T: V → W, the index of T is defined as the difference between the dimension of its kernel and the dimension of its cokernel:

index T = dim ker T − dim coker T.

The index was first called the Fredholm index because it came up in Fredholm’s investigation of integral equations. (More on this work in the next post.)

Robustness

The index of a linear operator is robust in the following sense. If V and W are Banach spaces and T: V → W is a continuous linear operator, then there is an open set around T in the space of continuous operators from V to W on which the index is constant. In other words, small changes to T don’t change its index.

Small changes to T may alter the dimension of the kernel or the dimension of the cokernel, but they don’t alter their difference.
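
Here’s a toy numerical illustration. The 2 × 3 matrix below has index 1; a small perturbation changes the dimensions of the kernel and the cokernel, but not their difference. (The example is mine.)

    import numpy as np
    from scipy.linalg import null_space

    def index(A):
        # dim ker A minus dim coker A, where the cokernel is the kernel of A^T
        return null_space(A).shape[1] - null_space(A.T).shape[1]

    A = np.array([[1.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0]])   # rank 1: kernel dim 2, cokernel dim 1
    B = A.copy()
    B[1, 1] = 0.01                    # small perturbation, rank 2: kernel dim 1, cokernel dim 0
    print(index(A), index(B))         # 1 1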

Relation to Fredholm alternative

The next post discusses the Fredholm alternative theorem. It says that if K is a compact linear operator on a Hilbert space and I is the identity operator, then the Fredholm index of I − K is zero. The post will explain how this relates to solving linear (integral) equations.

Analogy to Euler characteristic

We can make an exact sequence with the spaces V and W and the kernel and cokernel of T as follows:

0 → ker T → V → W → coker T → 0

All this means is that the image of one map is the kernel of the next.

We can take the alternating sum of the dimensions of the spaces in this sequence:

dim ker T − dim V + dim W − dim coker T.

If V and W have the same finite dimension, then this alternating sum equals the index of T.

The Euler characteristic is also an alternating sum. For a simplex, the Euler characteristic is defined by

V − E + F

where V is the number of vertices, E the number of edges, and F the number of faces. We can extend this to higher dimensions as the number of zero-dimensional objects (vertices), minus the number of one-dimensional objects (edges), plus the number of two-dimensional objects, minus the number of three-dimensional objects, etc.

A more sophisticated definition of Euler characteristic is the alternating sum of the dimensions of cohomology spaces. These also form an exact sequence.

The Atiyah-Singer index theorem says that for elliptic operators on manifolds, two kinds of index are equal: the analytical index and the topological index. The analytical index is essentially the Fredholm index. The topological index is derived from topological information about the manifold.

This is analogous to the Gauss-Bonnet theorem that says you can find the Euler characteristic, a topological invariant, by integrating Gauss curvature, an analytic calculation.

Other posts in this series

This is the middle post in a series of three. The first was on kernels and cokernels, and the next is on the Fredholm alternative.

Kernel and Cokernel

The kernel of a linear transformation is the set of vectors mapped to 0. That’s a simple idea, and one you’ll find in every linear algebra textbook.

The cokernel is the dual of the kernel, but it’s much less commonly mentioned in textbooks. Sometimes the idea of a cokernel is there, but it’s not given that name.

Degrees of Freedom and Constraints

One way of thinking about kernel and cokernel is that the kernel represents degrees of freedom and the cokernel represents constraints.

Suppose you have a linear transformation T: V → W and you want to solve the equation Tx = b. If x is a solution and Tk = 0, then x + k is also a solution. You are free to add elements of the kernel of T to a solution.

If c is an element of W that is not in the image of T, then Tx = c has no solution, by definition. In order for Tx = b to have a solution, the vector b must not have any components in the subspace of W that is complementary to the image of T. This complementary space is the cokernel. The vector b must not have any component in the cokernel if Tx = b is to have a solution.

If W is an inner product space, we can define the cokernel as the orthogonal complement to the image of T.

Another way to think of the kernel and cokernel is that in the linear equation Ax = b, the kernel consists of degrees of freedom that can be added to x, and the cokernel consists of degrees of freedom that must be removed from b in order for there to be a solution.
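
Here’s a small numerical illustration of both points, using a rank-deficient matrix made up for the purpose.

    import numpy as np
    from scipy.linalg import null_space

    A = np.array([[1.0, 0.0, 1.0],
                  [0.0, 1.0, 1.0],
                  [1.0, 1.0, 2.0]])          # rank 2: kernel and cokernel each have dimension 1

    kernel   = null_space(A)                 # directions that can be added to a solution
    cokernel = null_space(A.T)               # directions b must be orthogonal to

    b = A @ np.array([1.0, 2.0, 3.0])        # a right-hand side known to be in the image
    x = np.linalg.lstsq(A, b, rcond=None)[0] # one particular solution

    k = kernel[:, 0]
    print(np.allclose(A @ (x + k), b))       # True: adding a kernel element gives another solution
    print(np.allclose(cokernel.T @ b, 0))    # True: b has no component in the cokernel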

Cokernel definitions

You may also see the cokernel defined as the quotient space W / image(T). This is not the same space as the complement of the image of T: the quotient is a new space, while the complement is a subspace of W. However, these two spaces are isomorphic. This is a little bit of foreshadowing: the most general idea of a cokernel will only hold up to isomorphism.

You may also see the cokernel defined as the kernel of the adjoint of T. This suggests where the name “cokernel” comes from: the dual of the kernel of an operator is the kernel of the dual of the operator.

Kernel and cokernel dimensions

I mentioned above that there are multiple ways to define cokernel. They don’t all define the same space, but they define isomorphic spaces. And since isomorphic spaces have the same dimension, all definitions of cokernel give spaces with the same dimension.

There are several big theorems related to the dimensions of the kernel and cokernel. These are typically included in linear algebra textbooks, even those that don’t use the term “cokernel.” For example, the rank-nullity theorem can be stated without explicitly mentioning the cokernel, but it is equivalent to the following.

For a linear operator T: V → W,

dim V − dim W = dim ker T − dim coker T.

When V and W are finite dimensional, both sides are well defined. Otherwise, the right side may be defined when the left side is not. For example, let V and W both be the space of functions analytic in the entire complex plane and let T be the operator that takes the second derivative. Then the left side is ∞ − ∞ but the right side is 2: the kernel is all functions of the form az + b and the cokernel is 0 because every analytic function has an antiderivative.

The right hand side of the equation above is the definition of the index of a linear operator. This is the subject of the next post.

Full generality

Up to this point the post has discussed kernels and cokernels in the context of linear algebra. But we could define the kernel and cokernel in contexts with less structure, such as groups, or more structure, such as Sobolev spaces. The most general definition is in terms of category theory.

Category theory makes the “co” in cokernel obvious. The cokernel is the dual of the kernel in the same way that everything in category theory is related to its co-thing: simply turn all the arrows around. This makes the definition of cokernel easier, but it makes the definition of kernel harder.

We can’t simply define the kernel as “the stuff that gets mapped to 0” because category theory has no way to look inside objects. We can only speak of objects in terms of how they interact with other objects in their category. There’s no direct way to define 0, much less things that map to 0. But we can define something that acts like 0 (zero objects), and things that act like maps to 0 (zero morphisms), if the category we’re working in contains such things; not all categories do.

For all the tedious details, see the nLab articles on kernel and cokernel.

Related posts

 

Topological Abelian Groups

This post will venture further into abstract mathematics than most of my posts. If this isn’t what you’re looking for, you might try browsing here for more concrete articles.

Incidentally, although I’m an applied mathematician, I also appreciate pure math. I imagine most applied mathematicians do as well. But what I do not appreciate is pseudo-applied math, pure math that pretends to be more useful than it is. Pure math is elegant. Applied math is useful. The best math is elegant and useful. Pseudo-applied math is the worst because it is neither elegant nor useful [1].

***

A common theme in pure mathematics, and especially the teaching of pure mathematics, is to strip items of interest down to their most basic properties, then add back properties gradually. One motivation for this is to prove theorems assuming no more structure than necessary.

Choosing a level of abstraction

For example, we can think of the Euclidean plane as the vector space ℝ², but we can think of it as having less structure or more structure. If we just think about adding and subtracting vectors, and forget about scalar multiplication for a moment, then ℝ² is an Abelian group. We could ignore the fact that addition is commutative and think of it simply as a group. We could continue to ignore properties and go down to monoids, semigroups, and magmas.

Going the other direction, there is more to the plane than its algebraic structure. We can think of ℝ² as a topological space, in fact a Hausdorff space, and in fact a metric space. We could think of the plane as a topological vector space, a Banach space, and more specifically a Hilbert space.

In short, there are many ways to classify the plane as a mathematical object, and we can pick the one best suited for a particular purpose, one with enough structure to get done what we want to get done, but one without additional structure that could be distracting or make our results less general.

Topological groups

A topological group is a set with a topological structure, and a group structure. Furthermore, the two structures must play nicely together, i.e. we require the group operations to be continuous.

Unsurprisingly, an Abelian topological group is a topological group whose group structure is Abelian.

Not everything about Abelian topological groups is unsurprising. The motivation for this post is a surprise that we’ll get to shortly.

Category theory

A category is a collection of objects and structure-preserving maps between those objects. The meaning of “structure-preserving” varies with context.

In the context of vector spaces, maps are linear transformations. In the context of groups, the maps are homomorphisms. In the context of topological spaces, the maps are continuous functions.

In the previous section I mentioned structures playing nicely together. Category theory makes this idea of playing together nicely explicit by requiring maps to have the right structure-preserving properties. So while the category of groups has homomorphisms and the category of topological spaces has continuous functions, the category of topological groups has continuous homomorphisms.

Abelian categories

The category of Abelian groups is much nicer than the category of groups. This takes a while to appreciate. Abelian groups are groups after all, so isn’t the category of Abelian groups just a part of the category of groups? No, it’s more subtle than that. Here’s a post that goes into the distinction between the categories of groups and Abelian groups.

The category of Abelian groups is so nice that the term Abelian category was coined to describe categories that are as nice as the category of Abelian groups. To put it another way, the category of Abelian groups is the archetypical Abelian category.

Now here’s the surprise I promised above: the category of topological Abelian groups is not an Abelian category. More on that at nLab.

If you naively think of an Abelian category as a category containing Abelian things, then this makes no sense. If a grocery cart is a cart containing groceries, then you’d think a gluten-free grocery cart is a grocery cart containing gluten-free groceries.

A category is not merely a container, like a shopping cart, but a working context. What makes an Abelian category special is that it has several nice properties as a working context. If that idea is new to you, I’d recommend carefully reading this post.

Related posts

[1] This reminds me of the quote by William Morris: “Have nothing in your houses that you do not know to be useful or believe to be beautiful.”

Millionth powers

I was poking around Richard Stanley’s site today and found the following problem on his miscellaneous page.

Find a positive integer n < 10,000,000 such that the first four digits (in the decimal expansion) of n^1,000,000 are all different. The problem should be solved in your head.

The solution is not unique, but the solution Stanley gives is n = 1,000,001. Why should that work?

Let M = 1,000,000. We will show that the first four digits of (M + 1)^M are 2718.

\begin{align*} (1 + M)^M &= \left(M\left(1 + \frac{1}{M}\right)\right)^M \\ &= M^M \left(1 + \frac{1}{M}\right)^M \\ &\approx M^M e \\ &= 2718\ldots \end{align*}

This uses the fact that (1 + 1/n)^n → e as n → ∞. Since M = 10^6, M^M is a power of 10, so multiplying by it doesn’t change the leading digits: the leading digits of (M + 1)^M are the leading digits of e. If you’re doing this in your head, as Stanley suggests, you’re going to have to take it on faith that setting n = M will give you at least 4 decimals of e, which it does.

If you allow yourself to use a computer, you can use the bounds

\left(1 + \frac{1}{n}\right)^n < e < \left(1 + \frac{1}{n}\right)^{n+1}

to prove that sticking in n = M gives you a value between 2.718280 and 2.718283. So in fact we get 6 correct decimals, and we only needed 4.

There are many solutions to Stanley’s puzzle, the smallest being 4. The first four digits of 4^M are 9802. How could you determine this?

You may not be able to compute 4^M and look at its first digits, depending on your environment, but you can tell the first few digits of a number from its approximate logarithm.

log10 4^M = M log10 4 = 602059.9913279624.

It follows that

4^M = 10^602059 · 10^0.9913279624 = 9.80229937666385 × 10^602059.
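
Here’s the same calculation as a short Python sketch (the function is mine, not Stanley’s):

    from math import floor, log10

    M = 1_000_000

    def leading_digits(n, k=4):
        # First k digits of n^M, from the fractional part of M log10(n)
        frac = (M * log10(n)) % 1
        return floor(10 ** (k - 1 + frac))

    print(leading_digits(1_000_001))  # 2718
    print(leading_digits(4))          # 9802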

There are many other solutions: 7, 8, 12, 14, 16, …

Related posts

Mr. Bell and Bell numbers

One day Eric Temple Bell (1883–1960) was looking at the power series for the double exponential function, exp(exp(x)) and noticed a similarity to the power series for exp(x). You can find his account in [1]. He would have calculated the series by hand, but we have the advantage of software like Mathematica.

We can get the first five terms of the series, centered at 0, with the command

    Series[Exp[Exp[x]], {x, 0, 5}]

This gives us

e+e x+e x^2+\frac{5 e x^3}{6}+\frac{5 e x^4}{8}+\frac{13 e x^5}{30}+ \ldots

If you pull out the factor of e from each term, and change the denominators to match those in the power series for exp(x) you get

e\left( 1 + \frac{1}{1!} x + \frac{2}{2!} x^2 + \frac{5}{3!} x^3 + \frac{15}{4!}x^4 + \frac{52}{5!} x^5 + \ldots\right)

with integers in all the numerators. It’s not obvious a priori that these numbers should even be integers, but they are.

Bell called the sequence of numerators the exponential numbers: 1, 1, 2, 5, 15, 52, … The sequence is now known as the Bell numbers despite Bell’s modesty. Bell wasn’t the first to study this sequence of numbers, but he developed their properties more fully.

Applications

Bell numbers come up a lot in applications, which is why Bell wasn’t the first to notice them. (He may have been the first to come to them via their exponential generating function.) For example, the nth Bell number Bn is the number of ways to partition a set of n labeled items. Bn is also the nth moment of a Poisson random variable with λ = 1.
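
For example, here’s a brute-force check in Python that counting set partitions gives the Bell numbers. The recursive generator is a standard construction, not anything from Bell.

    def partitions(collection):
        # Generate all partitions of a list by inserting the first element
        # into each block of each partition of the rest, or into a block by itself.
        if len(collection) == 1:
            yield [collection]
            return
        first, rest = collection[0], collection[1:]
        for smaller in partitions(rest):
            for i, block in enumerate(smaller):
                yield smaller[:i] + [[first] + block] + smaller[i + 1:]
            yield [[first]] + smaller

    print([sum(1 for _ in partitions(list(range(n)))) for n in range(1, 6)])
    # [1, 2, 5, 15, 52]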

Bell’s triangle

There is a construction of Bell numbers analogous to Pascal’s triangle. Charles Sanders Peirce discovered what we now call Bell’s triangle fifty years before Bell discovered the Bell numbers.

To create Bell’s triangle, start with a row containing only 1.

The first number in each successive row is set to the last number in the previous row.

Then fill in the rest of the row: each entry is the sum of the entry to its left and the entry above the one to its left.

    1
    1  2
    2  3  5
    5  7 10 15
   15 20 27 37 52
   …

The numbers in the first column are the Bell numbers.
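
Here’s the construction in a few lines of Python (my own sketch):

    def bell_triangle(rows):
        triangle = [[1]]
        for _ in range(rows - 1):
            prev = triangle[-1]
            row = [prev[-1]]              # start with the last entry of the previous row
            for x in prev:
                row.append(row[-1] + x)   # entry to the left, plus the entry above it
            triangle.append(row)
        return triangle

    for row in bell_triangle(5):
        print(row)
    # The first entry of each row is a Bell number: 1, 1, 2, 5, 15, ...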

Related posts

[1] E. T. Bell. Exponential Numbers. The American Mathematical Monthly, Vol. 41, No. 7 (Aug. – Sep., 1934), pp. 411–419.

How many ways can you triangulate a regular polygon?

In this post we want to count the number of ways to divide a regular polygon [1] into triangles by connecting vertices with straight lines that do not cross.

Squares

For a square, there are two possibilities: we either connect the NW and SE corners,

or we connect the SW and NE corners.

Pentagons

For a pentagon, we pick one vertex and connect it to both non-adjacent vertices.

We can do this for any vertex, so there are five possible triangulations. All five triangulations are rotations of the same triangulation. What if we consider these rotations as equivalent? We’ll get to that later.

Hexagons

For a hexagon, things are more interesting. We can again pick any vertex and connect it to all non-adjacent vertices, giving six triangulations.

But there are more possibilities. We could connect every other vertex, creating an equilateral triangle inside. We can do this two ways, connecting either the even-numbered vertices or the odd-numbered vertices. Either triangulation is a rotation of the other.

We can also connect the vertices in a zig-zag pattern, creating an N-shaped pattern inside. We could also rotate this triangulation one or two turns. (Three turns gives us the same pattern again.)

Finally, we could also connect the vertices creating a backward N pattern.

General case

So to recap, we have 2 ways to triangulate a square, 5 ways to triangulate a pentagon, and 6 + 2 + 3 + 3 = 14 ways to triangulate a hexagon. Also, there is only 1 way to triangulate a triangle: do nothing.

Let Cn be the number of ways to triangulate a regular (n + 2)-gon. Then we have C1 = 1, C2 = 2, C3 = 5, and C4 = 14.

In general,

C_n = \frac{1}{n+1}\binom{2n}{n}

which is the nth Catalan number.

Catalan numbers are the answers to a large number of questions. For example, Cn is also the number of ways to fully parenthesize a product of n + 1 terms, and the number of full binary trees with n + 1 nodes.

The Catalan numbers have been very well studied, and we know that asymptotically

C_n \sim \frac{4^n}{n^{3/2} \sqrt{\pi}}

so we can estimate Cn for large n. For example, we could use the formula above to estimate the number of ways to triangulate a 100-gon to be 5.84 × 10^55. The 98th Catalan number is closer to 5.77 × 10^55. Two takeaways: Catalan numbers grow very quickly, and we can estimate them within an order of magnitude using the asymptotic formula.
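
Here’s a quick check of that estimate in Python (the helper functions are mine):

    from math import comb, pi, sqrt

    def catalan(n):
        return comb(2 * n, n) // (n + 1)

    def catalan_estimate(n):
        # asymptotic approximation 4^n / (n^(3/2) sqrt(pi))
        return 4 ** n / (n ** 1.5 * sqrt(pi))

    n = 98  # a 100-gon corresponds to C_98
    print(f"{catalan(n):.3g}")           # about 5.77e+55
    print(f"{catalan_estimate(n):.3g}")  # about 5.84e+55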

Equivalence classes

Now let’s go back and count the number of triangulations again, considering some variations on a triangulation to be the same triangulation.

We’ll consider rotations of the same triangulation to count only once. So, for example, we’ll say there is only one triangulation of a pentagon and four triangulations of a hexagon. If we consider mirror images to be the same triangulation, then there are three triangulations of a hexagon, counting the N pattern and the backward N pattern to be the same.

Grouping rotations

The number of equivalence classes of n-gon triangulations, grouping rotations together, is OEIS sequence A001683. Note that the sequence starts at 2.

OEIS gives a formula for this sequence:

a(n) = \frac{1}{n}C_{n-2} + \frac{1}{2}C_{n/2-1} + \frac{2}{3} C_{n/3 - 1}

where Cx is zero when x is not an integer. So a(6) = 4, as expected.

Grouping rotations and reflections

The number of equivalence classes of n-gon triangulations, grouping rotations and reflections together, is OEIS sequence A000207. Note that the sequence starts at 3.

OEIS gives a formula for this sequence as well:

a(n) = \frac{1}{2n}C_{n-2} + \frac{1}{4}C_{n/2-1} + \frac{1}{2} C_{\lceil (n+1)/2\rceil - 2} + \frac{1}{3} C_{n/3 - 1}

As before, Cx is zero when x is not an integer. This gives a(6) = 3, as expected.

The formula on the OEIS page is a little confusing since it uses C(n) to denote Cn−2 .
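
Here’s a sketch of both formulas in Python, using the same convention that Cx is zero when x is not an integer. The function names are mine.

    from math import ceil, comb

    def catalan(x):
        # C_x, with the convention that C_x = 0 when x is not a non-negative integer
        if x < 0 or x != int(x):
            return 0
        x = int(x)
        return comb(2 * x, x) // (x + 1)

    def up_to_rotation(n):                    # OEIS A001683
        return round(catalan(n - 2) / n + catalan(n / 2 - 1) / 2
                     + 2 * catalan(n / 3 - 1) / 3)

    def up_to_rotation_and_reflection(n):     # OEIS A000207
        return round(catalan(n - 2) / (2 * n) + catalan(n / 2 - 1) / 4
                     + catalan(ceil((n + 1) / 2) - 2) / 2 + catalan(n / 3 - 1) / 3)

    print(up_to_rotation(6), up_to_rotation_and_reflection(6))  # 4 3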

Related posts

[1] Our polygons do not need to be regular, but they do need to be convex.

1000 most common words

Last week I wrote about a hypothetical radio station that plays the top 100 songs in some genre, with songs being chosen randomly according to Zipf’s law. The nth most popular song is played with probability proportional to 1/n.

This post is a variation on that post, looking at text consisting of the 1,000 most common words in a language, where word frequencies follow Zipf’s law.

How many words of text would you expect to read until you’ve seen all 1000 words at least once? The math is the same as in the radio station post. The simulation code is the same too: I just changed a parameter from 100 to 1,000.

The result of a thousand simulation runs was an average of 41,246 words with a standard deviation of 8,417.
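
The original simulation code isn’t shown here, but a sketch along these lines (my own code, with a smaller number of runs) does the same thing:

    import numpy as np

    rng = np.random.default_rng()

    def words_until_all_seen(n=1000, batch=10_000):
        # Word k (1-based) appears with probability proportional to 1/k.
        p = 1.0 / np.arange(1, n + 1)
        p /= p.sum()
        seen = np.zeros(n, dtype=bool)
        num_seen = 0
        count = 0
        while num_seen < n:
            for w in rng.choice(n, size=batch, p=p):
                count += 1
                if not seen[w]:
                    seen[w] = True
                    num_seen += 1
                    if num_seen == n:
                        break
        return count

    runs = [words_until_all_seen() for _ in range(100)]   # the post used 1,000 runs
    print(int(np.mean(runs)), int(np.std(runs)))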

This has pedagogical implications. Say you were learning a foreign language by studying naturally occurring text with a relatively small vocabulary, such as newspaper articles. You might have to read a lot of text before you’ve seen all of the thousand most common words.

On the one hand, it’s satisfying to read natural text. And it’s good to have the most common words reinforced the most. But it might be more effective to have slightly engineered text, text that has been subtly edited to make sure common words have not been left out. Ideally this would be done with such a light touch that it isn’t noticeable, unlike heavy-handed textbook dialogs.

Miscellaneous mathematical symbols

As longtime readers of this blog have probably noticed, I like to poke around in Unicode occasionally. It’s an endless system of rabbit holes to explore.

This morning I was looking at the Miscellaneous Mathematical Symbols block. These are mostly obscure symbols, though I’m sure for each symbol that I think is obscure, there is someone out there who uses it routinely.

Perpendicular

The only common symbol in this block is ⟂ (U+27C2) for perpendicular. Even so, this symbol is a variation on ⊥ (U+22A5). The distinction is semantic rather than visual: U+22A5 is used for the Boolean value “false.”

In addition to using ⟂ to denote perpendicular lines, some (e.g. Donald Knuth) use the symbol to denote that two integers are relatively prime.

Geometric algebra

The block contains ⟑ (U+27D1), which is used in geometric algebra for the geometric product, a.k.a. the dot-wedge product. The block also contains the symbol for the dual operator ⟇ (U+27C7), the geometric antiproduct. Incidentally, Eric Lengyel’s Projective Geometric Algebra site officially sponsors these two Unicode symbols.

I’m sure these symbols predate Eric Lengyel’s use of them, but I can only recall seeing them used in his work.

Database joins

Unicode has four symbols for database joins. The bowtie symbol ⨝ (U+2A1D), used for inner (natural) joins, is in another block. The Miscellaneous Mathematical Symbols block has three other symbols for outer joins: left, right, and full. I posted a table of these on @CompSciFact this morning.

Angle brackets

The Miscellaneous Mathematical Symbols block also has angle brackets: ⟨ (U+27E8) and ⟩ (U+27E9). These correspond to \langle and \rangle in LaTeX. I’ve used the LaTeX commands, but I wasn’t sure whether I’d ever used the Unicode characters. I searched this blog and found that I did indeed use the characters in my post on the Gram matrix.
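
Here’s a quick way to look up any of these characters from Python:

    import unicodedata

    # Code points mentioned in this post
    for cp in (0x27C2, 0x22A5, 0x27D1, 0x27C7, 0x2A1D, 0x27E8, 0x27E9):
        ch = chr(cp)
        print(f"U+{cp:04X}  {ch}  {unicodedata.name(ch)}")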

More posts on math notation