I wanted to share a mathematical trick with you. Nothing too difficult, just not the first thing that someone would think of. One of the important things in mathematics is to try to use the same tools for different jobs. Be creative!

We will use the Cayley–Hamilton theorem to calculate arbitrary powers of a matrix A. But first, let's recall what the Cayley–Hamilton theorem says.

Cayley–Hamilton theorem:

Let A be a square matrix with characteristic polynomial $p(\lambda) = \det(\lambda I - A)$. Then $p(A) = 0$.
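Here is a quick numerical sanity check of the theorem for a $2 \times 2$ matrix (the example matrix is mine, not from the post): for $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$ the characteristic polynomial is $p(x) = x^2 - \mathrm{tr}(A)x + \det(A)$, and plugging A into it should give the zero matrix.

```python
def matmul(X, Y):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
tr = A[0][0] + A[1][1]                       # trace: 5
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]  # determinant: -2

A2 = matmul(A, A)
# p(A) = A^2 - tr(A)*A + det(A)*I, computed entry by entry:
pA = [[A2[i][j] - tr * A[i][j] + det * (1 if i == j else 0)
       for j in range(2)] for i in range(2)]
print(pA)  # the zero matrix [[0, 0], [0, 0]]
```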

In other words, the matrix A satisfies its own characteristic polynomial, a non-trivial result. But why should that be useful for calculating matrix powers? And why shouldn't we just do the calculation directly?

Let's count the multiplications needed to calculate $A^2$ for an $n \times n$ matrix A. We have n rows consisting of n elements each, and we have to multiply each one of those rows with each of the n columns of A. So, in total we have to do $n \cdot n \cdot n = n^3$ multiplications. How does that scale with n? Let's ask Wolfram.

That's that. And those results are only valid if we want to calculate $A^2$. If we want $A^k$, we need $k - 1$ matrix multiplications, i.e. $(k-1)n^3$ scalar multiplications, and things start to get out of hand.
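The counting argument above can be sketched directly (the function name is mine, and this is just the naive triple loop, ignoring faster algorithms like Strassen's): $n^2$ entries, each requiring a dot product of length n.

```python
def naive_matmul_cost(n):
    """Count the scalar multiplications in a naive n x n matrix product."""
    count = 0
    for i in range(n):          # n rows
        for j in range(n):      # times n columns
            for k in range(n):  # each dot product uses n multiplications
                count += 1
    return count

for n in (2, 3, 10):
    print(n, naive_matmul_cost(n))  # 8, 27, 1000

# Repeated multiplication for A^k costs (k - 1) * n^3:
k, n = 5, 10
print((k - 1) * naive_matmul_cost(n))  # 4000
```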

So you may wonder . . . since multiplying two matrices is so straightforward, how can we do better than that? Well, matrix multiplication differs from number multiplication in many ways, and it seems that we can do some stuff to make life easier. Let's take the general $2 \times 2$ matrix

$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$$

where a, b, c and d are real constants. We move on to calculate A's characteristic polynomial:

$$p(\lambda) = \lambda^2 - (a + d)\lambda + (ad - bc) = \lambda^2 - \mathrm{tr}(A)\,\lambda + \det(A),$$

which, by the previous theorem, tells us what we get if we replace $\lambda$ with A:

$$A^2 - \mathrm{tr}(A)\,A + \det(A)\,I = 0, \qquad \text{so} \qquad A^2 = \mathrm{tr}(A)\,A - \det(A)\,I.$$

So now, by using this theorem, we got back a formula for $A^2$ which involves only 2 multiplications, one addition and one subtraction. Not too shabby!
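We can check the shortcut $A^2 = \mathrm{tr}(A)\,A - \det(A)\,I$ on a concrete example (again, the example matrix is my own choice):

```python
def matmul(X, Y):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 1], [5, 3]]
tr = A[0][0] + A[1][1]                       # 5
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]  # 1

# A^2 via the Cayley-Hamilton shortcut: tr(A)*A - det(A)*I
shortcut = [[tr * A[i][j] - det * (1 if i == j else 0)
             for j in range(2)] for i in range(2)]
assert shortcut == matmul(A, A)  # agrees with the direct product
print(shortcut)  # [[9, 5], [25, 14]]
```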

But the fun doesn't end here. Since we know that this formula for $A^2$ holds, we can multiply the whole equation by A, and then we will also have a formula for $A^3$ consisting of lower powers of A. And because we already have a formula for all those lower powers in terms of A, we can express $A^3$ as a sum of a multiple of A and a multiple of I.
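The reduction described above can be sketched in Python (not the author's Mathematica; the helper names and example matrix are mine). Writing $A^n = p\,A + q\,I$ and multiplying by A, the relation $A^2 = \mathrm{tr}(A)\,A - \det(A)\,I$ gives $A^{n+1} = (tp + q)\,A - dp\,I$ with $t = \mathrm{tr}(A)$, $d = \det(A)$, so we only ever update two scalars:

```python
def matpow_coeffs(t, d, n):
    """Return (p, q) with A^n = p*A + q*I, given t = tr(A), d = det(A), n >= 1."""
    p, q = 1, 0  # A^1 = 1*A + 0*I
    for _ in range(n - 1):
        p, q = t * p + q, -d * p  # multiply by A and reduce A^2
    return p, q

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 1], [2, 0]]
t = A[0][0] + A[1][1]                      # 1
d = A[0][0] * A[1][1] - A[0][1] * A[1][0]  # -2

p, q = matpow_coeffs(t, d, 5)
via_formula = [[p * A[i][j] + q * (1 if i == j else 0)
                for j in range(2)] for i in range(2)]

# Compare against A^5 computed by repeated multiplication.
direct = A
for _ in range(4):
    direct = matmul(direct, A)
assert via_formula == direct
print(via_formula)  # [[21, 11], [22, 10]]
```

Note that the coefficients stay scalars, so each extra power costs a couple of scalar operations instead of a full $n^3$ matrix product.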

UPDATE: 7 August 2012

As it turns out, I was overly ambitious and confident in my back-of-the-envelope calculation, and of course my general formula is wrong, meaning that it doesn't follow that pattern. It's actually very easy to see by direct computation, and I thank Vahid Ranjbar for the heads up. 🙂

But what about our general formula? Does it exist? In my opinion, there does not seem to be any general pattern that could make the formula "short and beautiful", as I had hoped.

Here's some Mathematica code to replicate what I am about to show you.

Clear[y]

y = trA*A - detA; (*A^2 = tr(A) A - det(A) I; the identity matrix is implicit in this symbolic calculation, hence the bare detA term with a minus sign.*)

(*Make sure that you input the line below in a different cell from the ones above; each evaluation raises the power of A by one.*)
y = Simplify[Expand[y*A] /. A^2 -> y]

So, even if you don't speak Mathematica, you should have gathered by now that this code generates the formula for $A^n$: starting from the formula for $A^2$, each run of the last cell raises the power by one, so you run it $n - 2$ times. So, here's what we get for different values of n.

And we keep on getting more and more terms: as the power grows, one of the formulas already has 24 terms and a later one has 73. So while feasible for small powers of A, we can forget it for large ones.

I think your general formula for A^k is incorrect. Does a correct version exist?

You are right of course. Thanks for the heads up!

It turns out that there is no such easy formula (expressing powers of the matrix in terms of powers of the trace and the determinant) that works for every possible n, and it gets messy for higher powers. I wonder why I didn't immediately test the formula to see if it's correct.

Anyway, sorry if you visited my blog with hopes of using such a result. To my knowledge, such a formula doesn’t exist.

If $A = PDP^{-1}$ then $A^n = PD^nP^{-1}$. When $D$ is diagonal, powers are very easy. But I am sure there are better ways based on other matrix decompositions.
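The commenter's point can be sketched on a hand-diagonalizable example (the matrix, its eigenvectors, and all names below are my own illustration): with $A = PDP^{-1}$, computing $A^n$ only needs the diagonal entries of $D$ raised to the n-th power.

```python
def matmul(X, Y):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# A = [[4, 1], [0, 2]] has eigenvalues 4 and 2 with eigenvectors
# (1, 0) and (1, -2), so P, D and P^-1 can be written down by hand.
P = [[1, 1], [0, -2]]
D = [[4, 0], [0, 2]]
Pinv = [[1, 0.5], [0, -0.5]]

n = 6
Dn = [[D[0][0] ** n, 0], [0, D[1][1] ** n]]  # diagonal powers: cheap
An = matmul(matmul(P, Dn), Pinv)             # A^n = P D^n P^-1

# Compare with repeated multiplication of A itself.
A = matmul(matmul(P, D), Pinv)
direct = A
for _ in range(n - 1):
    direct = matmul(direct, A)
assert An == direct
print(An)  # [[4096, 2016.0], [0, 64.0]]
```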

Sure, that is known. 🙂

The search for another way started from thinking about this and about lower-rank approximations.