So, as I promised, this will be the first in a series of posts on the awesomeness of Fourier's (and some other people's) work. For this first part, I want to comment on one way of finding the solution of the following boundary value problem, or BVP:

$u''(x) = f(x), \quad x \in (a,b), \qquad u(a) = \alpha, \quad u(b) = \beta.$
This is what we call a Poisson problem: a non-homogeneous Laplace equation. Under some reasonable assumptions on $f$, this problem has a solution, which is also unique. In principle we just have to integrate the right-hand side twice.
In most practical applications, though, we can't do that: either $f(x)$ is complicated enough to rule out easy integration, or it has no antiderivative in terms of elementary functions. For example, one might want to solve the problem above with a slightly pathological right-hand side such as $f(x) = e^{-x^2}$. The function is nice and smooth, but it has a slight problem: it doesn't have an elementary antiderivative (its integral is the non-elementary error function). Luckily, numerical methods can come into play!
Now I don't want to give the impression that we use numerical methods only for pathological examples like the previous one. The main reason is that most equations can't be solved by analytical methods at all, so we have to make do with an approximation.
But how do we build that approximation to start with? Let's see what we've got. Since $f$ is continuous on $[a,b]$, we are searching for a function $u$ that is at least twice differentiable in the domain $[a,b]$. Let's suppose for the time being that $u$ is smoother than that, say four times continuously differentiable; we will need that in a moment for the Taylor expansions.
The next thing we need to decide is what kind of approximation we want. Would we like a (possibly continuous) function that's very close to our solution in some norm? Or do we only need to know its values at a limited set of points? It all depends on our problem.
We will surely need a set of points in $[a,b]$ for a start. Let's have them uniformly distributed; in another post we will talk about what happens if you drop that assumption.
So we just pick a positive integer $n$ and let $h = (b-a)/n$. Then getting our points is very simple: our $j$-th node shall be $x_j = a + jh$, for $j = 0, 1, \dots, n$.
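In code, building the grid is a one-liner. A small sketch (the values of $a$, $b$, and $n$ below are just illustrative, not from the problem):

```python
import numpy as np

# n+1 uniformly spaced nodes x_j = a + j*h on [a, b].
a, b, n = 0.0, 1.0, 4          # illustrative choices
h = (b - a) / n
x = a + h * np.arange(n + 1)   # same grid as np.linspace(a, b, n + 1)
print(x)                       # [0.   0.25 0.5  0.75 1.  ]
```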
OK, so we have our nodes. Now we need to build that approximation. The main idea is that we will approximate our second derivative with something called a centered difference. That is,

$u''(x) \approx \dfrac{u(x-h) - 2u(x) + u(x+h)}{h^2}.$
Where did that come from? First note that

$u''(x) = \lim_{h \to 0} \dfrac{u(x-h) - 2u(x) + u(x+h)}{h^2}.$
And if that's not apparent, think Taylor expansion. More specifically, expand $u(x+h)$ and $u(x-h)$ around $x$,

$u(x \pm h) = u(x) \pm h\,u'(x) + \frac{h^2}{2}u''(x) \pm \frac{h^3}{6}u'''(x) + O(h^4),$

and add them together; the odd-order terms cancel, leaving

$u(x-h) - 2u(x) + u(x+h) = h^2 u''(x) + O(h^4).$

Dividing by $h^2$, you see that this approximation gets closer and closer to the derivative as $h$ goes to $0$ with speed $O(h^2)$. That's what we call a second-order approximation.
Having this approximation, one is then tempted to define the following system of equations,

$\dfrac{u_{j-1} - 2u_j + u_{j+1}}{h^2} = f(x_j), \quad j = 1, \dots, n-1,$

together with the boundary conditions $u_0 = \alpha$ and $u_n = \beta$, where $u_j$ denotes our approximation of $u(x_j)$,
or in matrix form $A\mathbf{u} = \mathbf{f}$, where $A$ is a suitable banded (here, tridiagonal) matrix. When this system is solved by, e.g., Gaussian elimination, it gives back the approximations $u_j \approx u(x_j)$ at the nodes. Because $A$ is tridiagonal, a good algorithm can solve the system in $O(n)$ time, and by taking more points we can make the approximation as good as we like.
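Here is a minimal sketch of the whole method, assuming the convention $u'' = f$ with Dirichlet data as above. To check the error, it uses a manufactured solution $u(x) = \sin(x)$ (so $f(x) = -\sin(x)$); that test case, and the dense solver, are illustrative choices, not part of the method itself:

```python
import numpy as np

def solve_poisson_1d(f, a, b, alpha, beta, n):
    """Solve u'' = f on [a, b] with u(a) = alpha, u(b) = beta,
    using centered differences on n+1 uniform nodes."""
    h = (b - a) / n
    x = a + h * np.arange(n + 1)
    # Tridiagonal system for the interior unknowns u_1, ..., u_{n-1}.
    main = -2.0 * np.ones(n - 1)
    off = np.ones(n - 2)
    A = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h**2
    rhs = f(x[1:-1]).astype(float)
    rhs[0] -= alpha / h**2    # fold the boundary values into the RHS
    rhs[-1] -= beta / h**2
    u = np.empty(n + 1)
    u[0], u[-1] = alpha, beta
    u[1:-1] = np.linalg.solve(A, rhs)  # dense solve for clarity; a
                                       # tridiagonal solver would be O(n)
    return x, u

# Manufactured test: u(x) = sin(x) solves u'' = -sin(x).
x, u = solve_poisson_1d(lambda t: -np.sin(t), 0.0, np.pi / 2, 0.0, 1.0, 50)
print(np.max(np.abs(u - np.sin(x))))   # small, and shrinks like O(h^2)
```

The dense `np.linalg.solve` keeps the sketch short; in practice one would hand the three bands to a banded solver (e.g. `scipy.linalg.solve_banded`) to get the $O(n)$ cost mentioned above.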
Next time (which will come sooner, I promise) we will see what happens when we add more dimensions to our problem, trying to solve $\Delta u = f$ in two dimensions. We will find that using the finite difference method exactly as we did here results in a very slow computation, and that we can speed things up by using some of the properties of $A$.