Hi all! I thought I would take a break from my vacation to give you a heads-up on what I'm up to.
First things first, I've studied the theory behind the fast multipole method (FMM), an algorithm used to solve N-body problems. The algorithm by itself is pretty complex at first sight. How could we do better than the naive idea of calculating every pairwise interaction, at $O(N^2)$ cost? But then a couple of very bright people present it step by step, going from the $O(N^2)$ algorithm, to a straightforward $O(N \log N)$ one, and then to the $O(N)$ one. Cool math too! Instead of using a regular Taylor expansion for the far-field approximation, it uses a multipole expansion (think of it as a generalization of Fourier series to three dimensions, which instead of sines and cosines uses spherical harmonics as the basis functions).
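To make the $O(N^2)$ baseline concrete, here is a minimal sketch of the naive direct summation that the FMM improves on (the function name and the unit-potential kernel are just illustrative choices, not part of any FMM library):

```python
import numpy as np

def direct_potential(pos, charge):
    """Naive O(N^2) evaluation of pairwise 1/r potentials.

    Every one of the N targets sums contributions from all other
    N - 1 sources, so the cost grows quadratically with N. The FMM
    gets this down to O(N) by approximating well-separated clusters
    of sources with multipole expansions.
    """
    n = len(pos)
    phi = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i != j:
                phi[i] += charge[j] / np.linalg.norm(pos[i] - pos[j])
    return phi
```

The doubly nested loop is exactly the cost the multipole expansion avoids for far-away source clusters.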
I've also added another tool to my problem-solving arsenal. 🙂 I can now use dimensional analysis to find quick estimates of a couple of "hard" integrals. For example,

$$\int_{-\infty}^{\infty} e^{-ax^2}\,dx,$$
which is not one of the hard integrals I was talking about, but the same method works for the harder ones too.
Using dimensional analysis, we first assign $x$ a dimension, such as length; let's denote that with $L$. Then $dx$ will also have dimension $L$ (since it's a little bit of $x$). Reasoning out that the exponent $ax^2$ must be dimensionless, we can see that the dimension of $a$ must be $L^{-2}$. The integrand is dimensionless, so the integral itself has dimension $L$. Thus, since both sides of an equation must have the same dimensions, $C\,a^{-1/2}$ is a good candidate for the value of the integral (since $a^{-1/2}$ has dimension $L$), where $C$ is a dimensionless constant. Now we just need to find $C$, which is easy to estimate (or, in this case, find exactly). Just pick $a = 1$; then

$$C = \int_{-\infty}^{\infty} e^{-x^2}\,dx,$$

and since the integrand peaks at $1$ and most of its area sits roughly between $-1$ and $1$, a crude estimate gives $C \approx 2$.
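As a quick sanity check on the dimensional-analysis prediction, we can compare a numerical quadrature of the Gaussian against the $C\,a^{-1/2}$ form with the exact constant $C = \sqrt{\pi}$ (the function name here is made up for the example):

```python
import numpy as np
from scipy.integrate import quad

def gaussian_integral(a):
    """Numerically integrate exp(-a x^2) over the whole real line."""
    val, _err = quad(lambda x: np.exp(-a * x**2), -np.inf, np.inf)
    return val

# Dimensional analysis says the answer scales as a**-0.5;
# the exact dimensionless constant turns out to be sqrt(pi).
for a in (0.5, 1.0, 4.0):
    print(a, gaussian_integral(a), np.sqrt(np.pi / a))
```

Each printed pair should agree to quadrature precision, confirming both the $a^{-1/2}$ scaling and the constant.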
Did we get the dimensionless constant right? Pretty close! By calculating the integral exactly, we get

$$\int_{-\infty}^{\infty} e^{-ax^2}\,dx = \sqrt{\frac{\pi}{a}},$$

so the exact constant is $C = \sqrt{\pi} \approx 1.772$.
Thus we managed to conjure an interesting approximation. Here is a graph of the error between our approximation and the exact solution.
And as I mentioned, the same idea seems to apply to more difficult integrals as well.
Still, there are a couple of things that I don't understand about dimensional analysis when used in this context, mainly because I don't have any good references to study from. That's why I am writing some notes of my own with what little I can find. Expect more on this in the near future. 😉
I am also trying to learn more about optimizing CUDA code and how to apply all those ideas to an actual program. I was very lucky to find out about Git, because now I can try whatever comes to mind without feeling bad or anxious that I might break my code.
It turns out that writing the code and optimizing it are two very different things. I had heard of branch divergence in CUDA, but I wasn't sure how it could affect performance. Fortunately, I found an answer to that on Stack Overflow.
More news & math soon!