While working on a project, I had to learn a couple of things about semigroups of bounded operators, a really useful tool if, for example, you have some kind of Cauchy problem of the following form,

$u'(t) = A u(t), \qquad u(0) = u_0,$
where A is an operator.
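Roughly speaking, the point of the theory is that the semigroup $\{e^{tA}\}_{t \geq 0}$ generated by $A$ is what lets you write the solution formally as $u(t) = e^{tA} u_0$ (and tells you when, and in what sense, that expression actually makes sense).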
Reading about semigroups reminded me of another problem that I had seen as an undergrad, the problem of commutativity of matrix multiplication. Well, it’s not really a problem, more of a consequence of the way matrix multiplication is defined. But let’s take it from the top.
Let’s say that we have two square matrices A and B of dimensions $n \times n$. Obviously there are many ways in which we could define a multiplication between them. For example, we might want to multiply them element by element and get back a new matrix $C$ with $C_{ij} = A_{ij} B_{ij}$. So then, why is matrix multiplication that “complicated”?
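As a quick illustration of the difference between the two products (a minimal NumPy sketch; the matrices are just made-up examples):

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
    B = np.array([[0.0, 1.0],
                  [1.0, 0.0]])

    elementwise = A * B   # element-by-element product: C[i, j] = A[i, j] * B[i, j]
    usual       = A @ B   # usual matrix product: C[i, j] = sum over k of A[i, k] * B[k, j]

    print(elementwise)
    print(usual)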
The reason is this: a matrix represents a linear mapping between two vector spaces, and since it’s useful to consider compositions of those linear mappings, the matrix product is defined to be exactly that. In other words, if A represents the linear mapping T and B represents the linear mapping F, then AB represents the composition $T \circ F$, i.e. T(F(x)).
Since matrix multiplication is defined in this way, it’s not a surprise that generally $AB \neq BA$, since we can write down linear transformations T and F for which the different compositions T(F) and F(T) will be unequal. Take for example

$A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$ and $B = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}$,

then

$AB = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix} \neq \begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix} = BA.$
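A quick numerical check of that example (NumPy again):

    import numpy as np

    A = np.array([[1, 1],
                  [0, 1]])
    B = np.array([[1, 0],
                  [1, 1]])

    print(A @ B)                          # [[2 1] [1 1]]
    print(B @ A)                          # [[1 1] [1 2]]
    print(np.array_equal(A @ B, B @ A))   # False: the two products differ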
So a good question is: if I have a matrix A, can I somehow talk about the structure of a B such that $AB = BA$ or, in more modern notation, such that the commutator $[A, B] = AB - BA = 0$? Since we can represent linear mappings as matrices, this question extends naturally to them as well.
Extra notation: let’s denote the set of all B such that $AB = BA$ by $\mathrm{Com}(A)$.
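In code, checking membership in $\mathrm{Com}(A)$ numerically is a one-liner (the helper names commutator and in_com below are just my own):

    import numpy as np

    def commutator(A, B):
        """The commutator [A, B] = AB - BA."""
        return A @ B - B @ A

    def in_com(A, B, tol=1e-10):
        """Numerically check whether B is in Com(A), i.e. whether [A, B] is (close to) zero."""
        return np.allclose(commutator(A, B), 0.0, atol=tol)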
Let’s take a look at a special case.
Assumptions: Let A be square and diagonalizable.
So, we have a full set of eigenvectors, written as columns in a matrix V, and a set of eigenvalues $\lambda_i$, which we write on the diagonal of a matrix D. Then we can write A in the form $A = V D V^{-1}$. Supposing that A is $2 \times 2$, something that we can do without loss of generality as far as the dimensions are concerned, that means that:

$A = V D V^{-1} = \begin{pmatrix} v_1 & v_2 \end{pmatrix} \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix} \begin{pmatrix} v_1 & v_2 \end{pmatrix}^{-1},$

where $v_1, v_2$ are the eigenvectors and $\lambda_1, \lambda_2$ the corresponding eigenvalues.
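Numerically, this is exactly what numpy.linalg.eig gives you (a small sketch with a made-up symmetric matrix, which is guaranteed to be diagonalizable):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])

    eigvals, V = np.linalg.eig(A)   # columns of V are the eigenvectors
    D = np.diag(eigvals)

    # A is recovered as V D V^{-1} (up to floating-point error).
    print(np.allclose(A, V @ D @ np.linalg.inv(V)))   # True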
Can we now find a matrix $B \in \mathrm{Com}(A)$?
The answer is yes. Just suppose that B has the same eigenvectors as A but different eigenvalues. Then

$B = V E V^{-1}, \qquad E = \begin{pmatrix} \mu_1 & 0 \\ 0 & \mu_2 \end{pmatrix}.$
Then, since diagonal matrices commute with each other, we can see that

$AB = V D V^{-1} V E V^{-1} = V D E V^{-1} = V E D V^{-1} = V E V^{-1} V D V^{-1} = BA,$

or, in other words, $B \in \mathrm{Com}(A)$.
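And the numerical sanity check, in the same spirit as the sketches above (the eigenvalues of B here are arbitrary made-up numbers):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
    eigvals, V = np.linalg.eig(A)

    # B: same eigenvectors as A, different (arbitrary) eigenvalues.
    E = np.diag([5.0, -3.0])
    B = V @ E @ np.linalg.inv(V)

    print(np.allclose(A @ B, B @ A))   # True: B is in Com(A)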
I wonder if there is a way to find non-trivial examples of non-diagonalizable matrices in Com(A) (Jordan normal form, perhaps?). Any thoughts on that would be welcome. 😀
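One small observation in that direction (not a full answer, just a sketch): if A has an eigenvalue repeated across a two-dimensional eigenspace, then a matrix that acts as a Jordan block on that eigenspace (and diagonally elsewhere, in the same eigenbasis) commutes with A but is not diagonalizable. For example:

    import numpy as np

    # A is diagonal, with the eigenvalue 2 repeated.
    A = np.diag([2.0, 2.0, 3.0])

    # B acts as a 2x2 Jordan block on the repeated eigenspace, so B is not diagonalizable.
    B = np.array([[2.0, 1.0, 0.0],
                  [0.0, 2.0, 0.0],
                  [0.0, 0.0, 5.0]])

    print(np.allclose(A @ B, B @ A))   # True: B commutes with A despite not being diagonalizable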