The most straightforward way of multiplying two complex numbers requires four multiplications of real numbers:

```python
def mult(a, b, c, d):
    re = a*c - b*d
    im = b*c + a*d
    return (re, im)
```

Gauss [1] discovered a way to do this with only three multiplications:

```python
def gauss(a, b, c, d):
    t1 = a*c
    t2 = b*d
    re = t1 - t2
    im = (a + b)*(c + d) - t1 - t2
    return (re, im)
```
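As a sanity check, the two functions should always agree. Here's a quick comparison against Python's built-in complex arithmetic (both routines are repeated so the snippet runs on its own):

```python
# Both routines repeated here so the snippet is self-contained.
def mult(a, b, c, d):
    return (a*c - b*d, b*c + a*d)

def gauss(a, b, c, d):
    t1, t2 = a*c, b*d
    return (t1 - t2, (a + b)*(c + d) - t1 - t2)

# Compare against Python's built-in complex multiplication.
for (a, b, c, d) in [(1, 2, 3, 4), (-5, 7, 2, -3), (0, 1, 0, 1)]:
    z = complex(a, b) * complex(c, d)
    assert mult(a, b, c, d) == (z.real, z.imag) == gauss(a, b, c, d)
```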

You can count the `*`, `+`, and `-` symbols above and see that the first algorithm uses 4 multiplications and 2 additions/subtractions. The second algorithm uses 3 multiplications and 5 additions/subtractions.

If you’re carrying out calculations by hand, Gauss’s algorithm is faster, since multiplication takes much longer than addition or subtraction. On a computer, it may not be much faster, or faster at all. It depends on the hardware, and on whether the real and imaginary parts are integers or floats.

Gauss’s algorithm would be faster than the direct algorithm if the components were very large integers or very high-precision floats. The time required to add *n*-digit integers is *O*(*n*), but multiplying *n*-digit numbers takes longer than linear time; even the best known multiplication algorithm runs in *O*(*n* log *n*) time. So for large enough *n*, it’s worth doing some extra additions to save a multiplication.
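One rough way to see this is to time both algorithms on components with tens of thousands of digits, where a single multiplication dominates the cost of a few additions. A sketch using Python's arbitrary-precision integers and `timeit` (the exact timings and crossover point depend on your interpreter and hardware):

```python
import timeit

def mult(a, b, c, d):
    return (a*c - b*d, b*c + a*d)

def gauss(a, b, c, d):
    t1, t2 = a*c, b*d
    return (t1 - t2, (a + b)*(c + d) - t1 - t2)

# Components with about 100,000 digits each.
n = 10**100_000 + 12345
args = (n, n + 1, n + 2, n + 3)

# Both must agree, but gauss does one fewer big multiplication.
assert mult(*args) == gauss(*args)

t_mult = timeit.timeit(lambda: mult(*args), number=5)
t_gauss = timeit.timeit(lambda: gauss(*args), number=5)
print(f"direct: {t_mult:.3f}s, Gauss: {t_gauss:.3f}s")
```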

I imagine someone has created an algorithm for quaternion multiplication analogous to the algorithm above for complex multiplication, but you could try it yourself as an exercise. I wonder how much you could reduce the number of component multiplications relative to the most direct approach. (Update: here’s my stab at it.)
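For reference, here's the most direct approach for quaternions, a sketch using the standard Hamilton product formula. It uses 16 component multiplications and 12 additions/subtractions, so that's the baseline any cleverer scheme would have to beat:

```python
def quat_mult(a, b):
    # Hamilton product of quaternions a = a0 + a1*i + a2*j + a3*k
    # and b = b0 + b1*i + b2*j + b3*k, each given as a 4-tuple.
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return (a0*b0 - a1*b1 - a2*b2 - a3*b3,   # real part
            a0*b1 + a1*b0 + a2*b3 - a3*b2,   # i component
            a0*b2 - a1*b3 + a2*b0 + a3*b1,   # j component
            a0*b3 + a1*b2 - a2*b1 + a3*b0)   # k component

# Sanity checks: i*j = k and j*i = -k, so the product is not commutative.
assert quat_mult((0, 1, 0, 0), (0, 0, 1, 0)) == (0, 0, 0, 1)
assert quat_mult((0, 0, 1, 0), (0, 1, 0, 0)) == (0, 0, 0, -1)
```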

## Related posts

- How to compute a complex square root
- Quaternion square roots
- How fast can you multiply two matrices?

[1] I’ve seen this described as “commonly attributed to Gauss.” Maybe there’s some debate over whether Gauss did this or whether he was the first.

A similar trick is used in the [Karatsuba algorithm](https://en.wikipedia.org/wiki/Karatsuba_algorithm) for multiplying two polynomials (or large integers) with a divide-and-conquer approach.
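Applied recursively, the same three-multiplications-for-four identity drops the cost of multiplication from *O*(*n*²) to *O*(*n*^log₂ 3) ≈ *O*(*n*^1.585). A minimal sketch for nonnegative integers (CPython's big-integer multiplication already does something like this internally, so this is purely illustrative):

```python
def karatsuba(x, y):
    # Multiply nonnegative integers using three recursive
    # multiplications instead of four, Gauss-style.
    if x < 10 or y < 10:
        return x * y
    # Split each number into high and low halves at bit position n.
    n = max(x.bit_length(), y.bit_length()) // 2
    hi_x, lo_x = x >> n, x & ((1 << n) - 1)
    hi_y, lo_y = y >> n, y & ((1 << n) - 1)
    t1 = karatsuba(hi_x, hi_y)
    t2 = karatsuba(lo_x, lo_y)
    # Same identity as gauss(): the middle term reuses t1 and t2.
    mid = karatsuba(hi_x + lo_x, hi_y + lo_y) - t1 - t2
    return (t1 << (2 * n)) + (mid << n) + t2

assert karatsuba(12345, 67890) == 12345 * 67890
```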