Fast matrix-vector multiplication

Hi all. This is the first part of my notes on ideas for fast multiplication of matrices with vectors. It's also the first time I've used the LatexToWordpress tool (mentioned a few posts back). More to follow soon.

2. Low-rank approximation ideas

2.1. Low-rank approximation

Let's suppose that the matrix M is {N\times N} and has rank r. Then, if {r\ll N}, we can use the singular value decomposition, writing M as follows,

\displaystyle M = RSV \ \ \ \ \ (1)

where R, V are complex unitary matrices, {RR^{*}=I=VV^{*}}, and S is a diagonal matrix with non-negative entries, {\sigma_{1}\geq \sigma_{2}\geq \ldots\geq \sigma_{N}\geq 0}.

This way we can carry out the multiplication in {O(N\cdot r)} operations by evaluating the product {RSVv} from right to left, keeping only the first r columns of R, the first r singular values, and the first r rows of V. Since many matrices have full rank, this technique has limited applicability, but it produces exact results (up to floating-point error).
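As a quick illustration, here is a minimal NumPy sketch of this right-to-left evaluation (the function name and the random test matrix are my own additions, not part of the notes); np.linalg.svd returns the factors in the notation above, with V already in its "row" form:

import numpy as np

def multiply_via_svd(R, s, V, v):
    # Compute M @ v as R @ (s * (V @ v)).
    # R: (N, r) left singular vectors, s: (r,) singular values
    # (the diagonal of S), V: (r, N) right singular vectors.
    # Each step costs O(N * r), versus O(N^2) for a dense product.
    return R @ (s * (V @ v))

# Quick check on a random rank-r matrix.
N, r = 1000, 20
rng = np.random.default_rng(0)
M = rng.standard_normal((N, r)) @ rng.standard_normal((r, N))
R, s, V = np.linalg.svd(M, full_matrices=False)
v = rng.standard_normal(N)
assert np.allclose(multiply_via_svd(R[:, :r], s[:r], V[:r], v), M @ v)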

2.2. Effective low-rank approximation

In order to use the above ideas in more general cases, we do the following. Let the SVD of the matrix M be as before. We choose a real number {\epsilon>0} and define the effective low-rank approximation of M, {M_{\epsilon}}, as the matrix {RS_{\epsilon}V}, where {S_{\epsilon}} is the diagonal matrix keeping only the singular values {\sigma_{i}} such that

\displaystyle \sigma_{i}\geq \epsilon\cdot \sigma_{1} \ \ \ \ \ (2)

This new matrix {M_{\epsilon}} has rank

\displaystyle r_{\epsilon}=\#\left\{\sigma_{i}:\sigma_{i}\geq \epsilon\cdot \sigma_{1} \right\} \ \ \ \ \ (3)

Using this idea, we can carry out the multiplication in {O(N\cdot r_{\epsilon})} operations, at the cost of an extra error due to the approximation.
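A minimal sketch of the effective low-rank product, reusing the factors from the previous snippet (the threshold argument eps and the helper name are my choices for the example). The singular values returned by np.linalg.svd are already sorted in decreasing order, which the truncation below relies on:

import numpy as np

def effective_low_rank_multiply(R, s, V, v, eps):
    # Approximate M @ v by dropping singular values below eps * sigma_1.
    # r_eps is the rank of M_eps, as in (3); the product then costs
    # O(N * r_eps) per vector once the SVD factors are available.
    r_eps = int(np.sum(s >= eps * s[0]))
    return R[:, :r_eps] @ (s[:r_eps] * (V[:r_eps] @ v))

Note that computing the SVD itself is costly, so this pays off when the same factorization is reused to multiply many vectors v.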
