Solve that Poisson equation! Fast! Faster!!

Hi everyone!

So, as I promised, this will be the first of a series of posts on the awesomeness of Fourier’s (and some other people’s) work. For this first part, I want to comment on one way of finding the solution of the following boundary value problem, or BVP:

u_{xx}(x)=f(x),\ x\in[a,b],\ u(a)=u(b)=0

This is what we call a Poisson problem, a non-homogeneous Laplace equation. Under some reasonable assumptions on f, this problem has a unique solution. We just have to be able to integrate the right-hand side.

In most practical applications, it would seem that we can’t do that. Either f(x) is complicated enough that it doesn’t integrate easily, or its antiderivative can’t be written in terms of elementary functions. For example, one might want to find the solution to the following, slightly pathological, problem.

u_{xx}(x)=e^{-x^2},\ x\in[a,b],\ u(a)=u(b)=1

The function f(x) = exp(-x^2) is nice and cool but has a slight problem: it doesn’t have an antiderivative in terms of elementary functions. Luckily, numerical methods can come into play!

Now I don’t want to give the impression that we use numerical methods only for pathological examples like the previous one. The main reason we use them is that, more often than not, we simply can’t solve an equation by analytical methods, so we have to make do with an approximation.

But how do we build that approximation to start with? Let’s see what we’ve got. Since u_{xx}(x)=f(x), we are searching for a function that is at least twice differentiable on the domain [a,b]. Let’s suppose for the time being that u(x) is even smoother than that.

The next thing we need to decide is what kind of approximation we want. Would we like a continuous function that’s very close to our solution in some norm? Do we only need to know the values at a limited set of points? It all depends on our problem.

We will surely need a set of points in [a,b] for a start. Let’s have them uniformly distributed; in another post we will talk about what happens if you drop that assumption.

So we just pick a positive integer n and let h=(b-a)/n. Then getting our points is very simple. Our j-th node shall be x_{j}=j\cdot h + a,\ j=0,\ldots,n.
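In code, building that uniform grid is a one-liner. Here is a minimal sketch, assuming (for illustration) the interval [0, 1] and n = 10:

```python
import numpy as np

# Uniform grid on [a, b] with n subintervals (assumed example values).
a, b, n = 0.0, 1.0, 10
h = (b - a) / n                  # spacing between consecutive nodes
x = a + h * np.arange(n + 1)     # x_j = a + j*h for j = 0, ..., n

print(x[0], x[-1])               # the endpoints a and b
```

Note that there are n + 1 nodes in total, since both endpoints are included.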

OK. So we have our nodes. Now we need to build that approximation. The main idea is that we will approximate the second derivative with something called a centered difference. That is,

\boxed{ u_{xx}(x_{i})\simeq \frac{u(x_{i+1})-2u(x_{i})+u(x_{i-1})}{h^2}}

Where did that come from? First note that

u_{xx}(x_{i})=\lim_{h\to 0}\frac{u(x_{i+1})-2u(x_{i})+u(x_{i-1})}{h^2}

And if that’s not apparent, think Taylor expansion. More specifically, expand u(x_{i+1})=u(x_{i}+h) and u(x_{i-1})=u(x_{i}-h) around x_{i} and add them together. You will see that the quotient approaches the second derivative as h goes to 0, with an error that shrinks like h^2. That’s what we call a second-order approximation.
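You can watch that second-order behavior numerically. A quick sanity check (an assumed example, using u(x) = sin(x), whose second derivative is -sin(x)): halving h should divide the error by roughly 4.

```python
import numpy as np

u = np.sin
exact = -np.sin(1.0)  # u''(x) = -sin(x), evaluated at x = 1

for h in [0.1, 0.05, 0.025]:
    # Centered difference approximation of u''(1).
    approx = (u(1.0 + h) - 2 * u(1.0) + u(1.0 - h)) / h**2
    print(h, abs(approx - exact))
# Each halving of h cuts the error by about a factor of 4, i.e. O(h^2).
```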

Having this approximation, one is then tempted to define the following system of equations,

u_{i+1}-2u_{i}+u_{i-1}=h^2\cdot f(x_{i}),\ i=1,\ldots,n-1,\qquad u_{0}=u_{n}=0

or, in matrix form, Au=F, where A is a suitable banded (here, tridiagonal) matrix. Solving this system, e.g. by Gaussian elimination, gives back an approximation u_{i}\simeq u(x_{i}). A good algorithm exploits the tridiagonal structure to solve the system in O(n) time, and by taking more points we can make the approximation as good as we like.
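Putting the pieces together, here is a minimal sketch of the whole solve. It assumes [a, b] = [0, 1], zero boundary conditions, and a test right-hand side f(x) = -π² sin(πx) chosen so the exact solution u(x) = sin(πx) is known; `scipy.linalg.solve_banded` solves the tridiagonal system in O(n) time.

```python
import numpy as np
from scipy.linalg import solve_banded

# Assumed example setup: [a, b] = [0, 1], u(a) = u(b) = 0,
# f(x) = -pi^2 sin(pi x), so the exact solution is u(x) = sin(pi x).
a, b, n = 0.0, 1.0, 100
h = (b - a) / n
x = a + h * np.arange(n + 1)            # nodes x_0, ..., x_n
f = -np.pi**2 * np.sin(np.pi * x)

# Banded storage of the tridiagonal matrix A: each interior row of A
# is (1, -2, 1) / h^2, straight from the centered difference.
m = n - 1                               # number of interior unknowns
ab = np.zeros((3, m))
ab[0, 1:] = 1.0                         # superdiagonal
ab[1, :] = -2.0                         # main diagonal
ab[2, :-1] = 1.0                        # subdiagonal
ab /= h**2

# Solve A u = F for the interior values, then reattach the boundary zeros.
u_interior = solve_banded((1, 1), ab, f[1:-1])
u = np.concatenate([[0.0], u_interior, [0.0]])

err = np.max(np.abs(u - np.sin(np.pi * x)))
print(err)  # shrinks like h^2 as n grows
```

The banded solve is the O(n) step mentioned above: the solver never touches the zeros outside the three diagonals.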

Next time (which will come sooner, I promise) we will see what happens when we add more dimensions to our problem, trying to solve \Delta u = f(x),\ x\in\mathbb{R}^n. We will find that applying the finite difference method as we did here results in a very slow computation, and that we can speed things up by using some of the properties of A.

