While working on a project, I had to learn a couple of things about semigroups of bounded operators, a really useful tool if, for example, you have some kind of Cauchy problem of the following form,

$$u'(t) = A u(t), \qquad u(0) = u_0,$$

where A is an operator.

Reading about semigroups reminded me of another problem that I had seen as an undergrad, the problem of commutativity of matrix multiplication. Well, it’s not really a problem, more of a consequence of the way matrix multiplication is defined. But let’s take it from the top.

Let’s say that we have two square matrices A and B of dimensions $n \times n$. Obviously there are many ways in which we could define a multiplication between them. For example, we might want to multiply them element by element and get back a new matrix $C$ with entries $C_{ij} = A_{ij} B_{ij}$. So then, why is matrix multiplication that “complicated”?

The reason is this. A matrix represents a linear mapping between two vector spaces, and since it’s useful to consider compositions of those linear mappings, the matrix product is defined to be exactly that. In other words, if A represents the linear mapping T and B represents the linear mapping F, then AB represents the composition $T \circ F$.

Since matrix multiplication is defined in this way, it’s no surprise that generally $AB \neq BA$, since we can write down linear transformations T and F for which the different compositions $T \circ F$ and $F \circ T$ will be unequal. Take for example $A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$ and $B = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}$, then

$$AB = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix} \neq \begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix} = BA.$$
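As a quick sanity check, here is a minimal NumPy sketch (my own illustration, with two arbitrary $2 \times 2$ matrices) showing that the two products disagree:

```python
import numpy as np

# Two small matrices picked for illustration; almost any pair will do.
A = np.array([[1, 1],
              [0, 1]])
B = np.array([[1, 0],
              [1, 1]])

print(A @ B)   # [[2 1]
               #  [1 1]]
print(B @ A)   # [[1 1]
               #  [1 2]]
print(np.array_equal(A @ B, B @ A))  # False
```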
So a good question is: if I have a matrix A, can I somehow talk about the structure of a B such that $AB = BA$ or, in more modern notation, such that the commutator $[A,B] = AB - BA$ vanishes? Since we can represent linear mappings as matrices, this question extends naturally to them as well.

Extra notation: let’s denote the set of all B such that $AB = BA$ by $\mathrm{Com}(A)$.

Let’s take a look at a special case.

**Assumptions:** Let A be square and diagonalizable.

So, we have a full set of eigenvectors, written as columns in a matrix V, and a set of eigenvalues $\lambda_1, \dots, \lambda_n$, which we write as a diagonal matrix $D = \mathrm{diag}(\lambda_1, \dots, \lambda_n)$. Supposing that A is $n \times n$, something that we can do without loss of generality as far as the dimensions are concerned, that means that:

$$A = V D V^{-1}.$$
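As an aside, the decomposition itself is easy to check numerically; here is a small NumPy sketch (a symmetric matrix is used just to guarantee diagonalizability):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])    # symmetric, hence diagonalizable

eigvals, V = np.linalg.eig(A)  # columns of V are eigenvectors
D = np.diag(eigvals)

# Rebuild A from its eigendecomposition V D V^{-1}
A_rebuilt = V @ D @ np.linalg.inv(V)
print(np.allclose(A, A_rebuilt))  # True
```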
Can we now find a matrix $B \in \mathrm{Com}(A)$?

The answer is yes. Just suppose that B has the same eigenvectors as A but different eigenvalues $\mu_1, \dots, \mu_n$, collected in a diagonal matrix $E$, so that $B = V E V^{-1}$. Then

$$AB = V D V^{-1} V E V^{-1} = V D E V^{-1} = V E D V^{-1} = V E V^{-1} V D V^{-1} = BA,$$

since diagonal matrices commute with each other. Then, we can see that $AB = BA$ or, in other words, $B \in \mathrm{Com}(A)$.
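The same construction in code: reuse the eigenvector matrix V of A, pick fresh eigenvalues for E, and check that the resulting B commutes with A (again, just a sketch of the argument above):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
_, V = np.linalg.eig(A)

E = np.diag([5.0, -3.0])      # freely chosen eigenvalues for B
B = V @ E @ np.linalg.inv(V)  # same eigenvectors as A

print(np.allclose(A @ B, B @ A))  # True
```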

I wonder if there is a way to find non-trivial examples of non-diagonalizable matrices in Com(A) (Jordan normal form, perhaps?). Any thoughts on that would be welcome. 😀
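One cheap construction that seems to work (an observation of mine, not a full answer): if A is diagonalizable but has a repeated eigenvalue, a Jordan block placed inside the repeated eigenspace commutes with A while being non-diagonalizable:

```python
import numpy as np

# A is diagonal (so certainly diagonalizable) with repeated eigenvalue 1.
A = np.diag([1.0, 1.0, 2.0])

# B acts as a 2x2 Jordan block on the repeated eigenspace of A,
# so B is NOT diagonalizable, yet it commutes with A.
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 5.0]])

print(np.allclose(A @ B, B @ A))  # True
```

(If the eigenvalues of A are all distinct, this trick is unavailable, which is consistent with the question being non-trivial.)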


Well, commuting with a given matrix A boils down to a system of linear equations in the coefficients of B, making the solution straightforward if the dimension is not too large.
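That observation can be turned into a short sketch (my own, using the identity $\operatorname{vec}(AXB) = (B^{\mathsf T} \otimes A)\operatorname{vec}(X)$): the condition AB = BA is the linear system $(I \otimes A - A^{\mathsf T} \otimes I)\operatorname{vec}(B) = 0$, so Com(A) is a null space:

```python
import numpy as np

def commutant_basis(A, tol=1e-10):
    """Return a basis of Com(A) = {B : AB = BA} by solving
    (I kron A - A^T kron I) vec(B) = 0 (column-stacking vec)."""
    n = A.shape[0]
    M = np.kron(np.eye(n), A) - np.kron(A.T, np.eye(n))
    _, s, Vt = np.linalg.svd(M)
    # rows of Vt with (numerically) zero singular value span the null space
    return [Vt[i].reshape(n, n, order="F")
            for i in range(n * n) if s[i] <= tol]

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # a 2x2 Jordan block
basis = commutant_basis(A)
print(len(basis))            # 2: Com(A) is spanned by I and the nilpotent part
for B in basis:
    print(np.allclose(A @ B, B @ A))  # True each time
```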

Well yes, of course, this should always be possible if the matrix A is invertible. Thanks for that!

I was hoping for something that would give me a hint at some kind of structure of Com(A). For example, from the previous eigenvalue/eigenvector idea, I can see that $[A,B] = 0$ if and only if A and B preserve the direction of the same (eigen)vectors in $\mathbb{R}^n$.

But of course, no one has high hopes for special structure in a general case (like when having a random A). 🙂