But will they commute?

While working on a project, I had to learn a bit about semigroups of bounded operators, a really useful tool if, for example, you have some kind of Cauchy problem of the following form,

\left\{\begin{array}{lr}\frac{du(t)}{dt}=Au\\u(0)=x\end{array}\right.

where A is an operator.
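At least in the finite-dimensional setting, where A is just a matrix, the solution can be written with the matrix exponential, and the family \{e^{tA}\}_{t\geq 0} is exactly the semigroup generated by A:

u(t)=e^{tA}x,\qquad e^{tA}=\sum_{k=0}^{\infty}\frac{t^{k}A^{k}}{k!}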

Reading about semigroups reminded me of another problem that I had seen as an undergrad, the problem of commutativity of matrix multiplication. Well, it’s not really a problem, more of a consequence of the way matrix multiplication is defined. But let’s take it from the top.

Let’s say that we have two square matrices A and B of dimensions 2\times 2. Obviously there are many ways in which we could define a multiplication between them. For example, we might want to multiply them element by element and get back a new matrix C=\left(a_{i,j}b_{i,j}\right)_{1\leq i,j\leq 2}. So then, why is matrix multiplication that “complicated”?

The reason is this. A matrix represents a linear mapping between two vector spaces, and since it’s useful to consider compositions of those linear mappings, the matrix product is defined to be exactly that. In other words, if A represents the linear mapping T and B represents the linear mapping F, then AB represents the composition T(F), i.e. first apply F, then apply T.
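A quick numerical check of that statement (a numpy sketch, with randomly chosen matrices just for illustration): applying F and then T to a vector gives the same result as acting with the single matrix AB.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))   # matrix of the linear map T
B = rng.standard_normal((2, 2))   # matrix of the linear map F
x = rng.standard_normal(2)

first_F_then_T = A @ (B @ x)      # T(F(x))
print(np.allclose(first_F_then_T, (A @ B) @ x))   # True: AB is the matrix of the composition
```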

Since matrix multiplication is defined in this way, it’s no surprise that in general AB\neq BA, since we can easily write down mappings T and F for which the two compositions T(F) and F(T) are different. Take for example Tx=2x and Fx=x/2+1 (the latter is affine rather than linear, but it makes the point about composition order), then

T(F(x))=x+2\neq F(T(x))=x+1
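For a genuinely linear (matrix) example, one convenient choice out of many is

A=\left(\begin{array}{lr}0&1\\ 0&0\end{array}\right),\quad B=\left(\begin{array}{lr}0&0\\ 1&0\end{array}\right),\quad AB=\left(\begin{array}{lr}1&0\\ 0&0\end{array}\right)\neq\left(\begin{array}{lr}0&0\\ 0&1\end{array}\right)=BA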

So a good question is, if I have a matrix A, can I somehow talk about the structure of a B such that AB=BA or, in more modern notation, such that the commutator [A,B]=AB-BA=0? Since we can represent linear mappings as matrices, this question extends naturally to them as well.

Extra notation: let’s denote the set of all B such that [A,B]=0 by Com(A).
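Just as a quick computational aid (a numpy sketch; the specific matrices are the ones from the 2\times 2 example above), checking membership in Com(A) is a one-liner:

```python
import numpy as np

def commutator(A, B):
    """Return the commutator [A, B] = AB - BA."""
    return A @ B - B @ A

A = np.array([[0., 1.], [0., 0.]])
B = np.array([[0., 0.], [1., 0.]])

print(commutator(A, B))                                      # nonzero, so B is not in Com(A)
print(np.allclose(commutator(A, 2 * A + 3 * np.eye(2)), 0))  # True: polynomials in A are in Com(A)
```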

Let’s take a look at a special case.

Assumptions: Let A be square and diagonalizable.

So we have a full set of eigenvectors, written as the columns of a matrix V, and a set of eigenvalues \lambda_{1},\ \lambda_{2},\ldots, which we collect in a diagonal matrix D. Then we can write A in the form A=VDV^{-1}. Supposing that A is 2\times 2, something we can do without loss of generality as far as the dimensions are concerned, this means that:

A=VDV^{-1}=V\left(\begin{array}{lr}\lambda_{1} &0\\ 0& \lambda_{2}\end{array}\right)V^{-1}

Can we now find a matrix B\in Com(A)?

The answer is yes. Just suppose that B has the same eigenvectors as A but possibly different eigenvalues \gamma_{1},\ \gamma_{2}. Then

B=V\Gamma V^{-1}=V\left(\begin{array}{lr}\gamma_{1}&0\\ 0&\gamma_{2}\end{array}\right)V^{-1}

Then AB=VDV^{-1}\,V\Gamma V^{-1}=VD\Gamma V^{-1}=V\Gamma DV^{-1}=V\Gamma V^{-1}\,VDV^{-1}=BA, since the diagonal matrices D and \Gamma commute with each other; in other words, B\in Com(A).
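A numerical sanity check of that construction (a numpy sketch; the particular V, D and \Gamma below are arbitrary choices just for illustration):

```python
import numpy as np

V = np.array([[1., 1.], [0., 1.]])   # columns: the shared eigenvectors (any invertible V works)
D = np.diag([2., 5.])                # eigenvalues of A
G = np.diag([-1., 3.])               # different eigenvalues for B
Vinv = np.linalg.inv(V)

A = V @ D @ Vinv
B = V @ G @ Vinv

print(np.allclose(A @ B - B @ A, 0))   # True: B is in Com(A)
```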

I wonder if there is a way to find non-trivial examples of non-diagonalizable matrices in Com(A) (Jordan normal form, perhaps?). Any thoughts on that would be welcome. 😀
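Not a full answer, but one quick experiment along those lines (a numpy sketch with a hand-picked example): when A is diagonalizable but has a repeated eigenvalue, Com(A) can indeed contain non-diagonalizable matrices, for instance a Jordan block sitting inside the repeated eigenspace.

```python
import numpy as np

# A diagonalizable matrix with a repeated eigenvalue ...
A = np.diag([1., 1., 2.])
# ... and a candidate B that acts as a Jordan block on the repeated eigenspace.
B = np.array([[1., 1., 0.],
              [0., 1., 0.],
              [0., 0., 3.]])

print(np.allclose(A @ B - B @ A, 0))          # True: B is in Com(A)
# B is not diagonalizable: the eigenvalue 1 has algebraic multiplicity 2,
# but ker(B - I) is only one-dimensional.
print(np.linalg.matrix_rank(B - np.eye(3)))   # 2, so dim ker(B - I) = 3 - 2 = 1 < 2
```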


2 Comments

  1. Well, commuting with a given matrix A boils down to a system of linear equations in the coefficients of B (sketched below), making the solution straightforward if the dimension is not too large.

    1. Well yes, of course, this should always be possible if the matrix A is invertible. Thanks for that!

      I was hoping for something that would give me a hint on some kind of structure of Com(A). For example, from the previous eigenvalue – eigenvector idea, I can see that [A,B]=0 if and only if A, B preserve the direction of the same (eigen)vectors in \mathbb{R}^n.

      But of course, no one has high hopes for special structure in a general case (like when having a random A). 🙂
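One way to set up that linear system concretely (a numpy sketch; the helper name com_basis is just for illustration): using the column-stacking identity vec(AXB)=(B^{T}\otimes A)\,vec(X), the condition AB-BA=0 becomes M\,vec(B)=0 with M=I\otimes A-A^{T}\otimes I, so a basis of Com(A) can be read off from the nullspace of M.

```python
import numpy as np

def com_basis(A, tol=1e-10):
    """Basis of Com(A), i.e. all B with AB - BA = 0, via the nullspace of
    M = I kron A - A^T kron I (column-stacking convention for vec)."""
    n = A.shape[0]
    M = np.kron(np.eye(n), A) - np.kron(A.T, np.eye(n))
    _, s, vh = np.linalg.svd(M)
    null = vh[np.sum(s > tol):].conj().T                  # columns span the nullspace of M
    # Un-vectorize each column back into an n x n matrix (column-major order).
    return [null[:, k].reshape(n, n, order="F") for k in range(null.shape[1])]

A = np.array([[2., 1.], [0., 3.]])
for B in com_basis(A):
    print(np.allclose(A @ B - B @ A, 0))                  # True for every basis element
```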
