How would you define the cosine of a matrix? If you’re trying to think of a triangle whose sides are matrices, you’re not going to get there. Think of power series. If a matrix A is square, you can stick it into the power series for cosine,

cos(A) = I − A²/2! + A⁴/4! − A⁶/6! + ⋯

and call the sum the cosine of A.
This only works for square matrices. Otherwise the powers of A are not defined.
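As a quick sanity check on the definition, here is a minimal Python sketch that truncates the power series and compares the result with SciPy’s built-in matrix cosine, scipy.linalg.cosm. The example matrix and the number of terms are arbitrary choices.

```python
import numpy as np
from scipy.linalg import cosm

def cos_series(A, terms=20):
    """Approximate cos(A) by truncating cos(A) = I - A^2/2! + A^4/4! - ..."""
    n = A.shape[0]
    total = np.zeros((n, n))
    term = np.eye(n)                      # current term, starts at I
    A2 = A @ A
    for k in range(terms):
        total += term
        term = -term @ A2 / ((2*k + 1) * (2*k + 2))   # next term in the series
    return total

A = np.array([[0.0, 1.0], [-2.0, 0.5]])    # arbitrary square example
print(np.allclose(cos_series(A), cosm(A)))  # True
```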
The power series converges and has many of the properties you’d expect. However, the usual trig identities may or may not apply. For example,

cos(A + B) = cos(A) cos(B) − sin(A) sin(B)

only if the matrices A and B commute, i.e. AB = BA. To see why this is necessary, imagine trying to prove the sum identity above. You’d stick A + B into the power series and rearrange terms to get the right-hand side. Along the way you’d encounter terms like A² + AB + BA + B², which you’d like to factor into (A + B)², but you can’t justify that unless A and B commute.
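Here is a small numerical illustration of the role of commutativity, using SciPy’s matrix cosine and sine (scipy.linalg.cosm and scipy.linalg.sinm). The particular matrices are just convenient examples.

```python
import numpy as np
from scipy.linalg import cosm, sinm

# Two matrices that do NOT commute: AB != BA
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])
lhs = cosm(A + B)
rhs = cosm(A) @ cosm(B) - sinm(A) @ sinm(B)
print(np.allclose(lhs, rhs))   # False -- the sum identity fails

# A polynomial in C always commutes with C, so the identity holds
C = np.array([[1.0, 2.0], [3.0, 4.0]])
D = 2.0 * C + np.eye(2)
print(np.allclose(cosm(C + D), cosm(C) @ cosm(D) - sinm(C) @ sinm(D)))  # True
```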
Is cosine still periodic in this context? Yes, in the sense that cos(A + 2πI) = cos(A). This is because the diagonal matrix 2πI commutes with every matrix A, so the sum identity above applies, and cos(2πI) = I while sin(2πI) = 0.
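A quick numerical check of the periodicity claim, again with an arbitrary example matrix:

```python
import numpy as np
from scipy.linalg import cosm

A = np.array([[1.0, 2.0], [3.0, 4.0]])    # arbitrary square example
shift = 2 * np.pi * np.eye(2)             # 2*pi*I commutes with everything
print(np.allclose(cosm(A + shift), cosm(A)))   # True
```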
Why would you want to define the cosine of a matrix? One application of analytic functions of a matrix is solving systems of differential equations. Any linear system of ODEs with constant coefficients, of any order, can be rewritten in the form x′ = Ax where x is a vector of functions and A is a square matrix. Then the solution is x(t) = exp(tA) x(0). And cos(tA) is a solution to x′′ + A²x = 0, just as in calculus.
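To make the connection concrete, here is a sketch that checks both claims numerically: it integrates x′ = Ax and compares with exp(tA) x(0), then verifies by finite differences that cos(tA) x(0) satisfies x′′ + A²x = 0. The matrix, the initial vector, and the tolerances are arbitrary choices.

```python
import numpy as np
from scipy.linalg import expm, cosm
from scipy.integrate import solve_ivp

A = np.array([[2.0, 1.0], [1.0, 3.0]])    # arbitrary square example
x0 = np.array([1.0, -1.0])
t1 = 1.0

# x' = Ax: numerical integration vs. x(t) = exp(tA) x(0)
sol = solve_ivp(lambda t, x: A @ x, (0.0, t1), x0, rtol=1e-10, atol=1e-12)
print(np.allclose(sol.y[:, -1], expm(t1 * A) @ x0))          # True

# x'' + A^2 x = 0: check x(t) = cos(tA) x(0) with a central difference
h = 1e-4
x = lambda t: cosm(t * A) @ x0
xpp = (x(t1 + h) - 2 * x(t1) + x(t1 - h)) / h**2             # approximate x''(t1)
print(np.allclose(xpp + A @ A @ x(t1), 0.0, atol=1e-5))      # True
```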
* * *
For daily posts on analysis, follow @AnalysisFact on Twitter.