STATS 413

Convergence of random variables

In this post, we prove a few technical results on convergence of random variables:

  1. If $(X_n)_{n=1}^\infty \xrightarrow{p} X$, then $(X_n)_{n=1}^\infty \xrightarrow{d} X$; i.e. convergence in probability implies convergence in distribution;
  2. If $(X_n)_{n=1}^\infty \xrightarrow{d} x$ and $x$ is a constant (i.e. $(X_n)_{n=1}^\infty$ converges in distribution to a constant), then $(X_n)_{n=1}^\infty \xrightarrow{p} x$; i.e. convergence in distribution to a constant implies convergence in probability.

To keep things simple, we assume the random variables $(X_n)_{n=1}^\infty$, $X$ and the constant $x$ are scalars in the proofs; the results remain valid for (random) vectors.

Convergence in probability implies convergence in distribution. Recall the definition of convergence in probability: $(X_n)_{n=1}^\infty \xrightarrow{p} X$ iff

$$P(|X_n - X| > \epsilon) \to 0 \quad \text{for any } \epsilon > 0.$$
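
To make the definition concrete, here is a minimal numerical sketch (not part of the proof) using a hypothetical sequence $X_n = X + Z_n$ with $Z_n \sim N(0, 1/n)$: the Monte Carlo estimate of $P(|X_n - X| > \epsilon)$ should shrink toward $0$ as $n$ grows.

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 0.1
num_draws = 100_000

# Hypothetical example: X ~ N(0, 1) and X_n = X + Z_n with Z_n ~ N(0, 1/n),
# so |X_n - X| = |Z_n| and X_n converges to X in probability.
X = rng.standard_normal(num_draws)
for n in [1, 10, 100, 1000]:
    X_n = X + rng.normal(scale=1 / np.sqrt(n), size=num_draws)
    # Monte Carlo estimate of P(|X_n - X| > eps); it should tend to 0 as n grows.
    print(n, np.mean(np.abs(X_n - X) > eps))
```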

Let $F_n$ and $F$ be the CDFs of $X_n$ and $X$ respectively, and let $t$ be a continuity point of $F$ (i.e. $F$ is continuous at $t$). We have

$$\begin{aligned}
F_n(t) = P(X_n \le t) &= P(X_n \le t,\, X \le t+\epsilon) + P(X_n \le t,\, X > t+\epsilon) \\
&\le P(X \le t+\epsilon) + P(|X_n - X| > \epsilon) \\
&= F(t+\epsilon) + P(|X_n - X| > \epsilon),
\end{aligned}$$

where the inequality holds because $X_n \le t$ and $X > t+\epsilon$ together imply $|X_n - X| > \epsilon$.

As $n \to \infty$, we have $\limsup_{n\to\infty} F_n(t) \le F(t+\epsilon)$. Similarly, we have

$$\begin{aligned}
F(t-\epsilon) = P(X \le t-\epsilon) &= P(X \le t-\epsilon,\, X_n \le t) + P(X \le t-\epsilon,\, X_n > t) \\
&\le P(X_n \le t) + P(|X_n - X| > \epsilon) \\
&= F_n(t) + P(|X_n - X| > \epsilon),
\end{aligned}$$

which implies (as $n \to \infty$) $F(t-\epsilon) \le \liminf_{n\to\infty} F_n(t)$. We combine the two inequalities to see that all accumulation points of $F_1(t), F_2(t), \dots$ are sandwiched between $F(t-\epsilon)$ and $F(t+\epsilon)$:

$$F(t-\epsilon) \le \liminf_{n\to\infty} F_n(t) \le \limsup_{n\to\infty} F_n(t) \le F(t+\epsilon).$$

This is valid for any $\epsilon > 0$, so we let $\epsilon$ tend to $0$; since $F$ is continuous at $t$, both $F(t-\epsilon)$ and $F(t+\epsilon)$ tend to $F(t)$, and we obtain $\lim_{n\to\infty} F_n(t) = F(t)$. As this holds at every continuity point $t$ of $F$, it is exactly the definition of convergence in distribution.
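
As a quick sanity check (again a sketch, not part of the proof), we can compare the empirical CDF of the same hypothetical sequence $X_n = X + N(0, 1/n)$ noise to the standard normal CDF $\Phi$ at a fixed point $t$; the gap between $F_n(t)$ and $F(t)$ should shrink as $n$ grows.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
t = 0.5            # a continuity point of F (here F = Phi is continuous everywhere)
num_draws = 100_000

# Same hypothetical sequence as before: X ~ N(0, 1), X_n = X + N(0, 1/n) noise.
X = rng.standard_normal(num_draws)
for n in [1, 10, 100, 1000]:
    X_n = X + rng.normal(scale=1 / np.sqrt(n), size=num_draws)
    # Empirical F_n(t) = P(X_n <= t) versus F(t) = Phi(t); the gap should shrink with n.
    print(n, np.mean(X_n <= t), norm.cdf(t))
```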

Convergence in distribution to a constant implies convergence in probability. Recall the definition of convergence in distribution: $(X_n)_{n=1}^\infty \xrightarrow{d} X$ iff

$$F_n(t) \to F(t) \quad \text{at all continuity points of } F,$$

where $F_n$ and $F$ are the CDFs of $X_n$ and $X$ respectively. If the limit $X$ is the constant $x$ (i.e. $X = x$ with probability one), then its CDF is

$$F(t) = \begin{cases} 0 & t < x, \\ 1 & t \ge x. \end{cases}$$

For any $\epsilon > 0$, we have

$$\begin{aligned}
P(|X_n - x| > \epsilon) &= P(X_n < x-\epsilon) + P(X_n > x+\epsilon) \\
&\le F_n(x-\epsilon) + \bigl(1 - F_n(x+\epsilon)\bigr) \\
&\to 0 + (1 - 1) = 0,
\end{aligned}$$

where the limit holds because $x-\epsilon$ and $x+\epsilon$ are continuity points of $F$ (its only discontinuity is at $x$), so $F_n(x-\epsilon) \to F(x-\epsilon) = 0$ and $F_n(x+\epsilon) \to F(x+\epsilon) = 1$. Thus $P(|X_n - x| > \epsilon) \to 0$ for any $\epsilon > 0$, which is the definition of convergence in probability.
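
To illustrate the second result numerically (a sketch under the assumption $X_n \sim N(x, 1/n)$, which converges in distribution to the constant $x$), the Monte Carlo estimate of $P(|X_n - x| > \epsilon)$ should tend to $0$ as $n$ grows:

```python
import numpy as np

rng = np.random.default_rng(2)
x, eps = 2.0, 0.1
num_draws = 100_000

# Hypothetical sequence: X_n ~ N(x, 1/n), which converges in distribution to the constant x.
for n in [1, 10, 100, 1000]:
    X_n = rng.normal(loc=x, scale=1 / np.sqrt(n), size=num_draws)
    # Monte Carlo estimate of P(|X_n - x| > eps); it should tend to 0 as n grows.
    print(n, np.mean(np.abs(X_n - x) > eps))
```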

Posted on October 20, 2021 from Ann Arbor, MI