Kinds of convergence for sequences of random variables

This is a quick post on the different kinds of convergence that a sequence of random variables can have in a probability space.

Let us suppose that we have a sequence of random variables X_{1},\ X_{2},\ X_{3},\ldots on a probability space (\Omega, \mathcal{F}, P), where \Omega is the sample space, \mathcal{F} is the σ-algebra of measurable sets (events), and P is the probability measure.

First of all, here’s a list of the different modes of convergence for such sequences, along with notation.

  1. X_{n}\overset{a.s.}{\to}X is equivalent to saying that P\left(\{\omega:\lim_{n}X_{n}(\omega)=X(\omega)\}\right)=1. This is called almost sure convergence or convergence almost everywhere. 
  2. X_{n}\overset{P}{\to}X is equivalent to saying that \forall \epsilon>0,\ \lim_{n}P(|X_n-X|>\epsilon)=0. This is called convergence in probability. Essentially, this is just convergence in measure from real analysis.
  3. X_n\overset{L_p}{\to}X for p\in[1,\infty) is equivalent to saying that \mathbb{E}\left(|X_n-X|^{p}\right)\to 0 as n\to\infty. Here \mathbb{E} denotes the expectation (mean value) of a random variable. This is called convergence in L_p or convergence in p-th mean.
  4. X_n\overset{d}{\to}X is equivalent to saying that \mathbb{E}[f(X_n)]\to\mathbb{E}[f(X)] as n\to\infty for every bounded continuous f. This is called convergence in distribution, and it is the probability-theoretic analogue of weak convergence. (A numerical illustration of these modes follows the list.)

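To see these definitions in action, here is a minimal Monte Carlo sketch in Python with NumPy; the choice of sequence and all names are illustrative, not canonical. We take X_n to be the mean of n i.i.d. Uniform(0,1) draws, so X_n\to 1/2, and estimate the quantities appearing in definitions 2–4:

```python
import numpy as np

rng = np.random.default_rng(0)

n_paths = 10_000   # independent realisations of the sequence
eps = 0.05

for n in [10, 100, 1000]:
    # X_n = mean of n iid Uniform(0,1) draws; one value per path.
    x_n = rng.uniform(0.0, 1.0, size=(n_paths, n)).mean(axis=1)

    # Definition 2 (in probability): P(|X_n - 1/2| > eps) -> 0.
    p_far = np.mean(np.abs(x_n - 0.5) > eps)

    # Definition 3 (in L_2): E[|X_n - 1/2|^2] -> 0; here it equals
    # Var(X_n) = 1/(12 n) exactly.
    l2 = np.mean((x_n - 0.5) ** 2)

    # Definition 4 (in distribution): Z_n = sqrt(12 n)(X_n - 1/2) converges
    # in distribution to N(0, 1) by the CLT, so E[f(Z_n)] -> E[f(Z)] for
    # bounded continuous f; we test f = cos, where E[cos Z] = e^{-1/2}.
    z_n = np.sqrt(12 * n) * (x_n - 0.5)
    ef_zn = np.mean(np.cos(z_n))

    print(f"n={n:5d}  P(|X_n-1/2|>{eps}) = {p_far:.3f}  "
          f"E|X_n-1/2|^2 = {l2:.2e}  E[cos(Z_n)] = {ef_zn:.4f}")
```

As n grows, the first two printed quantities shrink toward 0, while \mathbb{E}[\cos(Z_n)] approaches \mathbb{E}[\cos Z]=e^{-1/2}\approx 0.607 for Z\sim N(0,1), illustrating convergence in probability, in L_2, and in distribution respectively. Almost sure convergence (definition 1) is a statement about individual paths, so a finite simulation can suggest it but not certify it.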
Now, here’s a graph that illustrates the implications between the modes of convergence stated above.

[Figure: ModesOfConvergence — a diagram of the standard implications between these modes: almost sure \Rightarrow in probability; L_p \Rightarrow in probability; in probability \Rightarrow in distribution.]

Another thing to note is that there are also notions of Cauchy sequences of random variables, and that more can be said than this graph represents. For example, if we have convergence in L_p, then we also have convergence in L_s for any 1\leq s\leq p. For a complete list of all possible cases, see here.
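For completeness, here is the standard one-line argument for that implication (Lyapunov's inequality). For 1\leq s\leq p, the map t\mapsto t^{p/s} is convex, so Jensen's inequality applied to |X_n-X|^{s} gives

\left(\mathbb{E}|X_n-X|^{s}\right)^{p/s}\;\leq\;\mathbb{E}\left[\left(|X_n-X|^{s}\right)^{p/s}\right]\;=\;\mathbb{E}|X_n-X|^{p},

hence \mathbb{E}|X_n-X|^{s}\leq\left(\mathbb{E}|X_n-X|^{p}\right)^{s/p}\to 0 whenever X_n\overset{L_p}{\to}X. Note that this step uses the fact that P is a probability measure; on a space of infinite measure the inclusion L_p\subseteq L_s fails.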
