Convergence of random variables
In this post, we prove a few technical results on convergence of random variables:
- If $X_n \xrightarrow{p} X$, then $X_n \xrightarrow{d} X$; i.e. convergence in probability implies convergence in distribution;
- If $X_n \xrightarrow{d} c$ and $c$ is a constant (i.e. $X_n$ converges in distribution to a constant), then $X_n \xrightarrow{p} c$; i.e. convergence in distribution to a constant implies convergence in probability.
To keep things simple, we assume the random variables $X_n$, $X$ and the constant $c$ are scalars in the proofs; the results remain valid for (random) vectors.
Convergence in probability implies convergence in distribution. Recall the definition of convergence in probability: $X_n \xrightarrow{p} X$ iff
$$\lim_{n \to \infty} P(|X_n - X| > \epsilon) = 0 \quad \text{for every } \epsilon > 0.$$
Let $F_n$ and $F$ be the CDFs of $X_n$ and $X$ respectively, and let $x$ be a continuity point of $F$ (i.e. $F$ is continuous at $x$). For any $\epsilon > 0$ we have
$$F_n(x) = P(X_n \le x) \le P(X \le x + \epsilon) + P(|X_n - X| > \epsilon) = F(x + \epsilon) + P(|X_n - X| > \epsilon),$$
since $X_n \le x$ together with $|X_n - X| \le \epsilon$ forces $X \le x + \epsilon$. As $n \to \infty$, we have $\limsup_{n} F_n(x) \le F(x + \epsilon)$. Similarly, we have
$$F(x - \epsilon) = P(X \le x - \epsilon) \le P(X_n \le x) + P(|X_n - X| > \epsilon) = F_n(x) + P(|X_n - X| > \epsilon),$$
which implies (as $n \to \infty$) $F(x - \epsilon) \le \liminf_{n} F_n(x)$. We combine the two inequalities to see that all accumulation points of $F_n(x)$ are sandwiched between $F(x - \epsilon)$ and $F(x + \epsilon)$:
$$F(x - \epsilon) \le \liminf_{n} F_n(x) \le \limsup_{n} F_n(x) \le F(x + \epsilon).$$
This is valid for any $\epsilon > 0$, so we let $\epsilon$ tend to 0 and use the continuity of $F$ at $x$ to obtain $\lim_{n \to \infty} F_n(x) = F(x)$, which is the definition of convergence in distribution.
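As a quick numerical sanity check (not part of the proof), here is a minimal simulation sketch in Python. The specific choices are illustrative assumptions: we take $X \sim N(0, 1)$ and $X_n = X + Z_n$ with independent noise $Z_n \sim N(0, 1/n)$, so that $X_n \xrightarrow{p} X$, and compare the empirical CDFs of $X_n$ and $X$ at a fixed continuity point.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 100_000  # number of Monte Carlo samples (illustrative choice)
x = 0.5      # a continuity point of F at which to compare CDFs

X = rng.standard_normal(m)  # X ~ N(0, 1)
F = np.mean(X <= x)         # empirical CDF of X at x

for n in [1, 10, 100, 1000]:
    # X_n = X + noise with variance 1/n, so P(|X_n - X| > eps) -> 0
    Xn = X + rng.standard_normal(m) / np.sqrt(n)
    F_n = np.mean(Xn <= x)  # empirical CDF of X_n at x
    print(f"n={n:5d}  F_n(x)={F_n:.4f}  F(x)={F:.4f}")
```

As $n$ grows, the printed $F_n(x)$ values settle down to $F(x)$, matching the sandwich argument above.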
Convergence in distribution to a constant implies convergence in probability. Recall the definition of convergence in distribution: $X_n \xrightarrow{d} X$ iff
$$\lim_{n \to \infty} F_n(x) = F(x) \quad \text{at every continuity point } x \text{ of } F,$$
where $F_n$ and $F$ are the CDFs of $X_n$ and $X$ respectively. If the limit is the constant $c$ (i.e. $X = c$ with probability one), then its CDF is
$$F(x) = \begin{cases} 0 & x < c, \\ 1 & x \ge c, \end{cases}$$
which is continuous everywhere except at $x = c$.
We have, for any $\epsilon > 0$,
$$P(|X_n - c| > \epsilon) = P(X_n > c + \epsilon) + P(X_n < c - \epsilon) \le 1 - F_n(c + \epsilon) + F_n(c - \epsilon).$$
Since $c + \epsilon$ and $c - \epsilon$ are continuity points of $F$, we have $F_n(c + \epsilon) \to F(c + \epsilon) = 1$ and $F_n(c - \epsilon) \to F(c - \epsilon) = 0$, so the right-hand side tends to 0; i.e. $X_n \xrightarrow{p} c$.
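Again as an illustrative sketch (the distribution of $X_n$ below is an arbitrary assumption, not dictated by the proof), we can take $X_n \sim N(c, 1/n)$, which converges in distribution to the constant $c$, and watch the estimate of $P(|X_n - c| > \epsilon)$ shrink:

```python
import numpy as np

rng = np.random.default_rng(0)
m, c, eps = 100_000, 2.0, 0.1  # samples, constant limit, tolerance

for n in [1, 10, 100, 1000]:
    # X_n ~ N(c, 1/n) converges in distribution to the constant c
    Xn = c + rng.standard_normal(m) / np.sqrt(n)
    p = np.mean(np.abs(Xn - c) > eps)  # estimate P(|X_n - c| > eps)
    print(f"n={n:5d}  P(|X_n - c| > {eps}) ~ {p:.4f}")
```

The printed probabilities decay toward 0, which is exactly the convergence in probability established above.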
Posted on October 20, 2021
from Ann Arbor, MI