
ANNALES UNIVERSITATIS MARIAE CURIE-SKŁODOWSKA, LUBLIN – POLONIA

VOL. LIX, 2005 SECTIO A 97–105

WIOLETTA NOWAK and WIESŁAW ZIĘBA

Types of conditional convergence

Abstract. The aim of this paper is to investigate relations between different types of conditional convergence. Results presented in this paper generalize theorems obtained by P. Fernandez [2] and A. R. Padmanabhan [5].

1. Introduction. Let $(\Omega, \mathcal{A}, P)$ be a probability space and let $\mathcal{F}$ be a sub-$\sigma$-field contained in $\mathcal{A}$. We denote by $E^{\mathcal{F}}X$ the conditional expectation of $X$ with respect to $\mathcal{F}$. Let $L^+$ (resp. $L^+(\mathcal{F})$) be the set of nonnegative (resp. nonnegative $\mathcal{F}$-measurable) random variables. For $X \in L^+$ we can define the conditional expectation $E^{\mathcal{F}}X$ as in [4]. For $X = X^+ - X^-$, where $X^+ = \max(X,0)$ and $X^- = \max(-X,0)$, we define the conditional expectation $E^{\mathcal{F}}X = E^{\mathcal{F}}X^+ - E^{\mathcal{F}}X^-$ if $\min(E^{\mathcal{F}}X^+, E^{\mathcal{F}}X^-) < \infty$ a.s. If $\max(E^{\mathcal{F}}X^+, E^{\mathcal{F}}X^-) < \infty$ a.s., then we say that $X \in L^1(\mathcal{F})$.

We will denote by $F_X(x) = P\{\omega : X(\omega) < x\}$ the distribution function of $X$ and by $\zeta_{F_X}$ the set of continuity points of $F_X$, that is, $\zeta_{F_X} = \{x : F_X(x) = F_X(x^+)\}$.

Definition 1.1. By the conditional distribution function of a random variable $X$ given the $\sigma$-field $\mathcal{F}$ we mean a process $F(x, \omega) = E^{\mathcal{F}}I_{[X<x]}(\omega)$ such that, for each $\omega$, $F(\cdot, \omega)$ is left-continuous and nondecreasing.

2000 Mathematics Subject Classification. Primary 60E05, Secondary 62E20.

Key words and phrases. Convergence in probability, conditional expectation.


Obviously $\lim_{x\to+\infty} F(x, \cdot) = 1$ a.s. and $\lim_{x\to-\infty} F(x, \cdot) = 0$ a.s. Note that if $x \in \zeta_{F_X}$, then $\lim_{\varepsilon\to 0} F(x-\varepsilon, \cdot) = F(x, \cdot)$ a.s.
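When $\mathcal{F}$ is generated by a finite partition, $F(x,\omega)$ can be computed atom by atom as the conditional probability of $[X < x]$ given the atom containing $\omega$. The following sketch (not part of the paper; the partition and the choice $X(\omega)=\omega^2$ are illustrative) approximates such a conditional distribution function by Monte Carlo.

```python
# Illustrative sketch: F(x, w) = E^F I_[X<x](w) when F is generated by a finite
# partition of Omega = [0,1]. On each atom A the value is the average of I_[X<x] over A.
import numpy as np

rng = np.random.default_rng(0)
omega = rng.uniform(0.0, 1.0, 200_000)        # Omega = [0,1] with Lebesgue measure
X = omega**2                                   # an illustrative random variable
atoms = [(0.0, 1/3), (1/3, 2/3), (2/3, 1.0)]   # partition generating F

def cond_cdf(x):
    """Return F(x, .) as one value per atom of F."""
    return [np.mean(X[(omega >= a) & (omega < b)] < x) for a, b in atoms]

for x in (0.05, 0.3, 0.8, 1.5):
    print(x, [round(v, 3) for v in cond_cdf(x)])
# Each atom value is nondecreasing in x and reaches 1 for large x, matching the
# properties of F(x, .) noted above.
```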

We will say that random variables $X$ and $Y$ have the same conditional distribution if for each $x \in \mathbb{R}$

$$E^{\mathcal{F}}I_{[X<x]} = E^{\mathcal{F}}I_{[Y<x]} \quad \text{a.s.}$$

Note that if random variables $X$ and $Y$ have the same conditional distribution, then they have the same distribution, because for each $x \in \mathbb{R}$ we have

$$F_X(x) = P(X < x) = E I_{[X<x]} = E\big(E^{\mathcal{F}}I_{[X<x]}\big) = E\big(E^{\mathcal{F}}I_{[Y<x]}\big) = F_Y(x).$$

The following example shows that the opposite implication is not true.

Example 1. Let $(\Omega, \mathcal{A}, P) = ([0,1], \mathcal{B}([0,1]), \mu)$, where $\mu$ denotes Lebesgue measure, and let $\mathcal{F} = \mathcal{A}$. Put

$$X(\omega) = \begin{cases} 0, & \omega \in [0, \tfrac{1}{2}), \\ 1, & \omega \in [\tfrac{1}{2}, 1], \end{cases}$$

and $Y(\omega) = 1 - X(\omega)$. Then $F_X(x) = F_Y(x)$ for all $x$, but

$$I_{[X<\frac{1}{2}]}(\omega) = \begin{cases} 1, & \omega \in [0, \tfrac{1}{2}), \\ 0, & \omega \in [\tfrac{1}{2}, 1], \end{cases} \qquad I_{[Y<\frac{1}{2}]}(\omega) = \begin{cases} 0, & \omega \in [0, \tfrac{1}{2}), \\ 1, & \omega \in [\tfrac{1}{2}, 1]. \end{cases}$$

Since $\mathcal{F} = \mathcal{A}$, we have $E^{\mathcal{F}}I_{[X<x]} = I_{[X<x]}$ and $E^{\mathcal{F}}I_{[Y<x]} = I_{[Y<x]}$, and $I_{[X<x]}(\omega) \neq I_{[Y<x]}(\omega)$ for every $\omega \in \Omega$ and every $x \in (0, 1]$. Hence these variables have the same distribution but do not have the same conditional distribution.
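A quick numerical restatement of Example 1 (a sketch, not part of the paper): with $\mathcal{F} = \mathcal{A}$ the conditional distribution functions are the indicator processes themselves, so they differ pointwise even though the ordinary distribution functions agree.

```python
# Sketch of Example 1: X and Y share the distribution function, yet with F = A the
# conditional distribution functions E^F I_[X<x] = I_[X<x] and I_[Y<x] differ pointwise.
import numpy as np

omega = np.linspace(0.0, 1.0, 10_001)
X = np.where(omega < 0.5, 0.0, 1.0)
Y = 1.0 - X

for x in (0.25, 0.5, 0.75, 1.0):
    # unconditional distribution functions agree (up to discretization error) ...
    assert abs(np.mean(X < x) - np.mean(Y < x)) < 1e-3
    # ... but the indicators disagree at every point of [0,1]
    assert np.all((X < x) != (Y < x))
print("same distribution, different conditional distribution")
```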

Definition 1.2. We say that a sequence $\{X_n, n \geq 1\}$ of random variables $\mathcal{F}$-conditionally converges in distribution to the random variable $X$ if for each $x \in \zeta_{F_X}$

$$E^{\mathcal{F}}I_{[X_n<x]} \longrightarrow E^{\mathcal{F}}I_{[X<x]} \quad \text{a.s.}, \quad n \to \infty.$$

This fact will be denoted by $X_n \xrightarrow{\mathcal{F}\text{-}D} X$.

Note that if $X_n \xrightarrow{\mathcal{F}\text{-}D} X$, then $X_n \xrightarrow{D} X$, since for each $x \in \zeta_{F_X}$ we have, by the dominated convergence theorem,

$$\lim_{n\to\infty} F_{X_n}(x) = \lim_{n\to\infty} E I_{[X_n<x]} = \lim_{n\to\infty} E\big(E^{\mathcal{F}}I_{[X_n<x]}\big) = E\Big(\lim_{n\to\infty} E^{\mathcal{F}}I_{[X_n<x]}\Big) = E\, E^{\mathcal{F}}I_{[X<x]} = F_X(x).$$

The following example shows that the opposite implication is false.


Example 2. Let $(\Omega, \mathcal{A}, P)$ be defined as in the previous example and let $\mathcal{F} = \sigma\big([0,\tfrac{1}{3}), [\tfrac{1}{3},\tfrac{2}{3}), [\tfrac{2}{3},1]\big)$. For even values of $n$ put

$$X_n(\omega) = \begin{cases} \tfrac{1}{4}, & \omega \in [0,\tfrac{1}{3}), \\ \omega, & \omega \in [\tfrac{1}{3},\tfrac{2}{3}), \\ \tfrac{2}{3}, & \omega \in [\tfrac{2}{3},1], \end{cases}$$

while for odd values of $n$

$$X_n(\omega) = \begin{cases} \tfrac{2}{3}, & \omega \in [0,\tfrac{1}{3}), \\ \omega, & \omega \in [\tfrac{1}{3},\tfrac{2}{3}), \\ \tfrac{1}{4}, & \omega \in [\tfrac{2}{3},1]. \end{cases}$$

Then $X_n \xrightarrow{D} X$ (all the $X_n$ have the same distribution), however for $x = \tfrac{1}{3}$ we have $E^{\mathcal{F}}I_{[X_n<\frac{1}{3}]} = E^{\mathcal{F}}I_{A_n} = I_{A_n}$, where

$$A_n = \begin{cases} [0,\tfrac{1}{3}) & \text{for } n = 2k, \\ [\tfrac{2}{3},1] & \text{for } n = 2k+1, \end{cases} \qquad k \in \mathbb{N}.$$

Since $\lim_{n\to\infty} I_{A_n}$ does not exist, the sequence $\{X_n, n \geq 1\}$ is not conditionally convergent in distribution.
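The oscillation in Example 2 can be checked numerically (a sketch, not part of the paper): on each atom of $\mathcal{F}$ the value of $E^{\mathcal{F}}I_{[X_n<1/3]}$ flips with the parity of $n$, so no a.s. limit exists.

```python
# Sketch of Example 2: E^F I_[X_n < 1/3] equals I_{A_n}, where A_n alternates between
# the atoms [0,1/3) (even n) and [2/3,1] (odd n); the atom values below flip accordingly.
import numpy as np

omega = np.random.default_rng(1).uniform(0.0, 1.0, 100_000)
atoms = [(0.0, 1/3), (1/3, 2/3), (2/3, 1.0)]

def X_n(n, w):
    if n % 2 == 0:   # even n
        return np.where(w < 1/3, 0.25, np.where(w < 2/3, w, 2/3))
    return np.where(w < 1/3, 2/3, np.where(w < 2/3, w, 0.25))   # odd n

for n in (1, 2, 3, 4):
    on_atoms = [np.mean(X_n(n, omega[(omega >= a) & (omega < b)]) < 1/3) for a, b in atoms]
    print(n, [round(v, 2) for v in on_atoms])
# Output alternates between [0.0, 0.0, 1.0] (odd n) and [1.0, 0.0, 0.0] (even n).
```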

Definition 1.3. We say that a sequence $\{X_n, n \geq 1\}$ of random variables $\mathcal{F}$-conditionally converges in probability to the random variable $X$ if for every $\varepsilon > 0$

$$E^{\mathcal{F}}I_{[|X_n-X|>\varepsilon]} \longrightarrow 0 \quad \text{a.s.}, \quad n \to \infty,$$

which will be denoted by $X_n \xrightarrow{\mathcal{F}\text{-}P} X$.

It is easily seen that if $X_n \xrightarrow{\mathcal{F}\text{-}P} X$, then $X_n \xrightarrow{P} X$, because

$$\lim_{n\to\infty} P(|X_n - X| > \varepsilon) = \lim_{n\to\infty} E I_{[|X_n-X|>\varepsilon]} = \lim_{n\to\infty} E\big(E^{\mathcal{F}}I_{[|X_n-X|>\varepsilon]}\big) = E\Big(\lim_{n\to\infty} E^{\mathcal{F}}I_{[|X_n-X|>\varepsilon]}\Big) = 0.$$

Theorem 1.4. If $X_n \xrightarrow{\mathcal{F}\text{-}P} X$, then for every $\mathcal{F}$-measurable random variable $\eta > 0$ a.s. we have

$$E^{\mathcal{F}}I_{[|X_n-X|>\eta]} \longrightarrow 0 \quad \text{a.s.}, \quad n \to \infty.$$

Proof. Choose $\delta > 0$. Then

$$E^{\mathcal{F}}I_{[|X_n-X|>\eta]} \leq E^{\mathcal{F}}I_{[|X_n-X|>\delta,\, \eta\geq\delta]} + E^{\mathcal{F}}I_{[|X_n-X|>\eta,\, \eta<\delta]} \quad \text{a.s.}$$

Therefore

$$\limsup_{n\to\infty} E^{\mathcal{F}}I_{[|X_n-X|>\eta]} \leq \lim_{n\to\infty} E^{\mathcal{F}}I_{[|X_n-X|>\delta]} + E^{\mathcal{F}}I_{[\eta<\delta]} = E^{\mathcal{F}}I_{[\eta<\delta]} \longrightarrow 0 \quad \text{a.s.}$$

as $\delta \to 0$. Since $\delta$ is arbitrary, the result follows. □


Let $\mathcal{F}$ and $\mathcal{G}$ be sub-$\sigma$-fields contained in the $\sigma$-field $\mathcal{A}$ with $\mathcal{F} \subset \mathcal{G}$. In such a case, if $X_n \xrightarrow{\mathcal{G}\text{-}D} X$, then $X_n \xrightarrow{\mathcal{F}\text{-}D} X$, as for each $x \in \zeta_{F_X}$ we have, by conditional dominated convergence,

$$\lim_{n\to\infty} E^{\mathcal{F}}I_{[X_n<x]} = \lim_{n\to\infty} E^{\mathcal{F}}E^{\mathcal{G}}I_{[X_n<x]} = E^{\mathcal{F}}\Big(\lim_{n\to\infty} E^{\mathcal{G}}I_{[X_n<x]}\Big) = E^{\mathcal{F}}E^{\mathcal{G}}I_{[X<x]} = E^{\mathcal{F}}I_{[X<x]} \quad \text{a.s.}$$

Similarly, if $X_n \xrightarrow{\mathcal{G}\text{-}P} X$, then $X_n \xrightarrow{\mathcal{F}\text{-}P} X$. Indeed, for every $\varepsilon > 0$ we have

$$\lim_{n\to\infty} E^{\mathcal{F}}I_{[|X_n-X|>\varepsilon]} = \lim_{n\to\infty} E^{\mathcal{F}}E^{\mathcal{G}}I_{[|X_n-X|>\varepsilon]} = E^{\mathcal{F}}\Big(\lim_{n\to\infty} E^{\mathcal{G}}I_{[|X_n-X|>\varepsilon]}\Big) = 0 \quad \text{a.s.}$$

The opposite implications are not true.

Example 3. Let $(\Omega, \mathcal{A}, P)$ be defined as in the previous examples, $\mathcal{G} = \sigma\big([0,\tfrac{1}{2}), [\tfrac{1}{2},1]\big)$ and $\mathcal{F} = \{\emptyset, \Omega\}$. If

$$X_n(\omega) = \begin{cases} 1, & \omega \in [0,\tfrac{1}{2}), \\ 0, & \omega \in [\tfrac{1}{2},1], \end{cases}$$

for even values of $n$ and $X_n(\omega) = 1 - X_{n-1}(\omega)$ for odd values of $n$, then

$$\lim_{n\to\infty} E^{\mathcal{F}}I_{[X_n<x]} = \lim_{n\to\infty} P[X_n < x] = P[X < x] = E^{\mathcal{F}}I_{[X<x]} \quad \text{for } x \in \zeta_{F_X}.$$

However, $E^{\mathcal{G}}I_{[X_n<\frac{1}{2}]} = E^{\mathcal{G}}I_{A_n} = I_{A_n}$, where

$$A_n = \begin{cases} [0,\tfrac{1}{2}) & \text{for } n = 2k+1, \\ [\tfrac{1}{2},1] & \text{for } n = 2k, \end{cases} \qquad k \in \mathbb{N},$$

and $\lim_{n\to\infty} I_{A_n}$ does not exist.
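In code, Example 3 reads as follows (a sketch, not part of the paper): under the trivial σ-field the conditional expectation reduces to the constant $P[X_n < \tfrac{1}{2}] = \tfrac{1}{2}$, while under $\mathcal{G}$ the two atom values keep flipping.

```python
# Sketch of Example 3: E^F I_[X_n < 1/2] = P[X_n < 1/2] = 1/2 for every n (F trivial),
# while E^G I_[X_n < 1/2] = I_{A_n} oscillates between the two atoms of G.
import numpy as np

omega = np.linspace(0.0, 1.0, 100_000, endpoint=False)

def X_n(n, w):
    even = np.where(w < 0.5, 1.0, 0.0)
    return even if n % 2 == 0 else 1.0 - even

for n in (1, 2, 3, 4):
    ind = X_n(n, omega) < 0.5
    trivial = np.mean(ind)                                              # E^F, F = {0, Omega}
    on_atoms = [np.mean(ind[omega < 0.5]), np.mean(ind[omega >= 0.5])]  # E^G on the atoms
    print(n, round(trivial, 2), [round(v, 2) for v in on_atoms])
# trivial stays 0.5, but the atom values flip between [1.0, 0.0] and [0.0, 1.0].
```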

Example 4. Let $(\Omega, \mathcal{A}, P)$ be defined as in the previous example. Every integer $n \geq 1$ can be written in the form $n = 2^k + s$, where $k = \max\{l : 2^l \leq n\}$ and $s = 0, 1, \ldots, 2^k - 1$. If $\mathcal{G} = \mathcal{B}$, $\mathcal{F} = \{\emptyset, \Omega\}$ and

$$X_{2^k+s}(\omega) = \begin{cases} 1, & \omega \in \big[\tfrac{s}{2^k}, \tfrac{s+1}{2^k}\big), \\ 0, & \text{otherwise}, \end{cases}$$

then for every $\varepsilon > 0$,

$$E^{\mathcal{F}}I_{[|X_n|>\varepsilon]} = P[|X_n| > \varepsilon] \leq \frac{1}{2^k} \longrightarrow 0, \quad n \to \infty.$$

However, for every $\varepsilon > 0$, $E^{\mathcal{G}}I_{[|X_n|>\varepsilon]} = I_{[|X_n|>\varepsilon]}$. Thus for $\varepsilon = \tfrac{1}{2}$ we have $\limsup_{n\to\infty} I_{[|X_n|>\varepsilon]} = 1$ a.s. but $\liminf_{n\to\infty} I_{[|X_n|>\varepsilon]} = 0$ a.s. Hence $X_n \xrightarrow{\mathcal{F}\text{-}P} X$ with $X \equiv 0$, but it is not true that $X_n \xrightarrow{\mathcal{G}\text{-}P} X$.
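Example 4 is the classical "sliding hump" sequence; the following sketch (not part of the paper) makes the two behaviours visible: the probability of the hump shrinks like $2^{-k}$, yet every fixed $\omega$ is covered by a hump at each dyadic level, so the indicators do not converge pointwise.

```python
# Sketch of Example 4: X_{2^k+s} = 1 on [s/2^k, (s+1)/2^k) and 0 elsewhere.
# P[|X_n| > 1/2] = 2^{-k} -> 0, but for a fixed omega the indicator returns to 1
# once per dyadic level, so there is no pointwise (hence no G-conditional) convergence.
import math

def hump(n):
    """Interval [a, b) on which X_n equals 1, for n >= 1."""
    k = int(math.log2(n))
    s = n - 2**k
    return s / 2**k, (s + 1) / 2**k

omega0 = 0.3
lengths, hits = [], []
for n in range(1, 64):            # levels k = 0, ..., 5
    a, b = hump(n)
    lengths.append(b - a)         # = P[|X_n| > 1/2]
    hits.append(a <= omega0 < b)  # = I_[|X_n(omega0)| > 1/2]
print(round(min(lengths), 4))     # 0.0312: probabilities tend to 0
print(sum(hits))                  # 6: omega0 is hit once per level, infinitely often overall
```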


For $\mathcal{F} = \mathcal{A}$ the convergence $X_n \xrightarrow{\mathcal{F}\text{-}P} X$ implies a.s. convergence. Indeed, $E^{\mathcal{F}}I_{[|X_n-X|>\varepsilon]} = I_{[|X_n-X|>\varepsilon]}$ a.s. and

$$\limsup_{n\to\infty} I_{[|X_n-X|>\varepsilon]} = \lim_{n\to\infty} \sup_{k\geq n} I_{[|X_k-X|>\varepsilon]} = \lim_{n\to\infty} I_{\bigcup_{k=n}^{\infty}[|X_k-X|>\varepsilon]} = 0 \quad \text{a.s.}$$

Hence

$$\lim_{n\to\infty} P\left(\bigcup_{k=n}^{\infty}[|X_k - X| > \varepsilon]\right) = 0,$$

which is equivalent to a.s. convergence. On the other hand, the last statement implies the previous one, as can easily be seen. Thus $\mathcal{F}$-conditional convergence in probability provides a link between convergence in probability (for the trivial $\sigma$-field $\mathcal{F}$) and a.s. convergence (for $\mathcal{F} = \mathcal{A}$).

2. Main results.

Theorem 2.1. If $X_n \xrightarrow{\mathcal{F}\text{-}P} X$, then for each $x \in \zeta_{F_X}$

$$E^{\mathcal{F}}\big|I_{[X_n<x]} - I_{[X<x]}\big| \longrightarrow 0 \quad \text{a.s.}, \quad n \to \infty.$$

Proof. If $X_n \xrightarrow{\mathcal{F}\text{-}P} X$, then for every $\varepsilon > 0$, $\lim_{n\to\infty} E^{\mathcal{F}}I_{[|X_n-X|>\varepsilon]} = 0$ a.s. Moreover, writing $F_X^{\mathcal{F}}(x) = F(x, \cdot)$ for the conditional distribution function of $X$,

$$\begin{aligned}
E^{\mathcal{F}}\big|I_{[X_n<x]} - I_{[X<x]}\big| &= E^{\mathcal{F}}I_{([X_n<x]\triangle[X<x])} \\
&= E^{\mathcal{F}}I_{([X_n<x]\triangle[X<x])}I_{[|X_n-X|\geq\varepsilon]} + E^{\mathcal{F}}I_{([X_n<x]\triangle[X<x])}I_{[|X_n-X|<\varepsilon]} \\
&= E^{\mathcal{F}}I_{([X_n<x]\triangle[X<x])}I_{[|X_n-X|\geq\varepsilon]} + E^{\mathcal{F}}I_{([X_n<x]\triangle[X<x])}I_{[X-\varepsilon<X_n<X+\varepsilon]} \\
&\leq E^{\mathcal{F}}I_{[|X_n-X|\geq\varepsilon]} + E^{\mathcal{F}}I_{([X_n<x]\setminus[X<x])}I_{[X-\varepsilon<X_n<X+\varepsilon]} + E^{\mathcal{F}}I_{([X<x]\setminus[X_n<x])}I_{[X-\varepsilon<X_n<X+\varepsilon]} \\
&\leq E^{\mathcal{F}}I_{[|X_n-X|\geq\varepsilon]} + E^{\mathcal{F}}I_{[x\leq X<x+\varepsilon]} + E^{\mathcal{F}}I_{[x-\varepsilon\leq X<x]} \\
&= E^{\mathcal{F}}I_{[|X_n-X|\geq\varepsilon]} + F_X^{\mathcal{F}}(x+\varepsilon) - F_X^{\mathcal{F}}(x) + F_X^{\mathcal{F}}(x) - F_X^{\mathcal{F}}(x-\varepsilon) \quad \text{a.s.}
\end{aligned}$$

Thus for every $\varepsilon > 0$

$$\limsup_{n\to\infty} E^{\mathcal{F}}\big|I_{[X_n<x]} - I_{[X<x]}\big| \leq F_X^{\mathcal{F}}(x+\varepsilon) - F_X^{\mathcal{F}}(x-\varepsilon) \quad \text{a.s.}$$

Since $\varepsilon$ is arbitrary and $x \in \zeta_{F_X}$ (as $\varepsilon \to 0$ the right-hand side decreases a.s. to a nonnegative limit whose expectation is $F_X(x^+) - F_X(x) = 0$), we obtain $\limsup_{n\to\infty} E^{\mathcal{F}}\big|I_{[X_n<x]} - I_{[X<x]}\big| = 0$ a.s., and hence $\lim_{n\to\infty} E^{\mathcal{F}}\big|I_{[X_n<x]} - I_{[X<x]}\big| = 0$ a.s. for $x \in \zeta_{F_X}$. □

Corollary 2.2. If $X_n \xrightarrow{\mathcal{F}\text{-}P} X$, then $X_n \xrightarrow{\mathcal{F}\text{-}D} X$.


Theorem 2.3. $\lim_{n\to\infty} E^{\mathcal{F}}I_{([X_n<x]\triangle[X<x])} = 0$ a.s. for $x \in \zeta_{F_X}$ if and only if for every $D \in \mathcal{A}$ and each $x \in \zeta_{F_X}$

$$\lim_{n\to\infty} E^{\mathcal{F}}I_{[X_n<x]}I_D = E^{\mathcal{F}}I_{[X<x]}I_D \quad \text{a.s.}$$

Proof. Necessity. If $\lim_{n\to\infty} E^{\mathcal{F}}I_{([X_n<x]\triangle[X<x])} = 0$ a.s., then by the previous theorem for every $x \in \zeta_{F_X}$ we have

$$\big|E^{\mathcal{F}}I_{[X_n<x]}I_D - E^{\mathcal{F}}I_{[X<x]}I_D\big| = \big|E^{\mathcal{F}}\big(I_{[X_n<x]} - I_{[X<x]}\big)I_D\big| \leq E^{\mathcal{F}}\big|I_{[X_n<x]} - I_{[X<x]}\big| = E^{\mathcal{F}}I_{([X_n<x]\triangle[X<x])} \longrightarrow 0 \quad \text{a.s.}, \quad n \to \infty.$$

Sufficiency. If $D = [X < x]$ and $x \in \zeta_{F_X}$, then $\lim_{n\to\infty} E^{\mathcal{F}}I_{[X_n<x]}I_{[X<x]} = E^{\mathcal{F}}I_{[X<x]}$. Hence

$$\begin{aligned}
\lim_{n\to\infty} E^{\mathcal{F}}I_{([X_n<x]\triangle[X<x])} &= \lim_{n\to\infty} E^{\mathcal{F}}\big(I_{[X_n<x]} - I_{[X_n<x]}I_{[X<x]} + I_{[X<x]} - I_{[X_n<x]}I_{[X<x]}\big) \\
&= \lim_{n\to\infty} E^{\mathcal{F}}I_{[X_n<x]} - \lim_{n\to\infty} E^{\mathcal{F}}I_{[X_n<x]}I_{[X<x]} + E^{\mathcal{F}}I_{[X<x]} - \lim_{n\to\infty} E^{\mathcal{F}}I_{[X_n<x]}I_{[X<x]} = 0 \quad \text{a.s.} \qquad \square
\end{aligned}$$

Note that $\lim_{n\to\infty} E^{\mathcal{F}}I_{([X_n<x]\triangle[X<x])} = 0$ a.s. for $x \in \zeta_{F_X}$ is equivalent to $\lim_{n\to\infty} E^{\mathcal{F}}I_{([X_n\leq x]\triangle[X\leq x])} = 0$ a.s. for $x \in \zeta_{F_X}$.

Theorem 2.4. If $\lim_{n\to\infty} E^{\mathcal{F}}I_{([X_n<x]\triangle[X<x])} = 0$ a.s. for each $x \in \zeta_{F_X}$, then $X_n \xrightarrow{\mathcal{F}\text{-}P} X$.

Proof. Let $B(x,r) = \{y : |x - y| < r\}$ and let $\varepsilon > 0$. Let $\{x_i\}_{i=1,2,\ldots}$ be a countable dense subset of $\mathbb{R}$. Select $\gamma$ such that $0 < \gamma < \frac{\varepsilon}{2}$ and $P[|X - x_i| = \gamma] = 0$ for every $i$ (such a $\gamma$ exists, since only countably many values of $\gamma$ can fail this condition). Then $E^{\mathcal{F}}I_{[|X-x_i|=\gamma,\, i=1,2,\ldots]} = 0$ a.s. It is clear that $\bigcup_{i=1}^{\infty} B(x_i, \gamma) = \mathbb{R}$, therefore by continuity of the measure $P$ we have $\lim_{t\to\infty} P\big[X \in \bigcup_{s=1}^{t} B(x_s, \gamma)\big] = 1$, hence

$$\lim_{t\to\infty} E^{\mathcal{F}}I_{[X\in\bigcup_{s=1}^{t}B(x_s,\gamma)]} = 1 \quad \text{a.s.}$$

For $0 < \delta < 1$ we define a random variable

$$N(\omega) = \inf\Big\{t : E^{\mathcal{F}}I_{[X\in\bigcup_{s=1}^{t}B(x_s,\gamma)]}(\omega) > 1 - \delta\Big\}.$$

Note that $A_n = [N(\omega) = n] \in \mathcal{F}$ and $P\big[\bigcup_{n=1}^{\infty} A_n\big] = 1$, i.e. $N < \infty$ a.s.


Moreover, if $K_t = \bigcup_{s=1}^{t} B(x_s, \gamma)$, then

$$\begin{aligned}
E^{\mathcal{F}}I_{[|X_k-X|>\varepsilon]} &= E^{\mathcal{F}}I_{[|X_k-X|>\varepsilon]\cap\bigcup_{n=1}^{\infty}A_n} = \sum_{n=1}^{\infty} E^{\mathcal{F}}I_{[|X_k-X|>\varepsilon]}I_{A_n} \\
&= \sum_{n=1}^{\infty} E^{\mathcal{F}}I_{[X\notin K_N,\, |X_k-X|>\varepsilon]}I_{A_n} + \sum_{n=1}^{\infty} E^{\mathcal{F}}I_{[X\in K_N,\, |X_k-X|>\varepsilon]}I_{A_n} \\
&\leq \sum_{n=1}^{\infty} E^{\mathcal{F}}I_{[X\notin K_N]}I_{A_n} + \sum_{n=1}^{\infty} E^{\mathcal{F}}I_{[X\in K_N,\, |X_k-X|>\varepsilon]}I_{A_n} \\
&\leq \delta + \sum_{n=1}^{\infty} E^{\mathcal{F}}I_{\bigcup_{s=1}^{N}([X\in B(x_s,\gamma)]\cap[|X_k-X|>\varepsilon])}I_{A_n}.
\end{aligned}$$

Thus, since

$$\begin{aligned}
\bigcup_{s=1}^{N}\big([X\in B(x_s,\gamma)]\cap[|X_k-X|>\varepsilon]\big)
&= \bigcup_{s=1}^{N}\big([x_s-\gamma<X<x_s+\gamma]\cap[|X_k-X|>\varepsilon]\big) \\
&\subset \bigcup_{s=1}^{N}\big([x_s-\gamma<X<x_s+\gamma]\cap[x_s-\gamma<X_k<x_s+\gamma]^C\big) \\
&\subset \Big(\bigcup_{s=1}^{N}[x_s-\gamma<X<x_s+\gamma]\cap[X_k<x_s+\gamma]^C\Big) \cup \Big(\bigcup_{s=1}^{N}[x_s-\gamma<X<x_s+\gamma]\cap[x_s-\gamma<X_k]^C\Big) \\
&\subset \Big(\bigcup_{s=1}^{N}[X<x_s+\gamma]\cap[X_k<x_s+\gamma]^C\Big) \cup \Big(\bigcup_{s=1}^{N}[x_s-\gamma<X]\cap[x_s-\gamma<X_k]^C\Big) \\
&= \Big(\bigcup_{s=1}^{N}[X<x_s+\gamma]\cap[X_k<x_s+\gamma]^C\Big) \cup \Big(\bigcup_{s=1}^{N}[X_k\leq x_s-\gamma]\cap[X\leq x_s-\gamma]^C\Big),
\end{aligned}$$


we have

$$E^{\mathcal{F}}I_{[|X_k-X|>\varepsilon]} \leq \delta + \sum_{n=1}^{\infty} E^{\mathcal{F}}\Big(\sum_{s=1}^{N} I_{([X<x_s+\gamma]\triangle[X_k<x_s+\gamma])}I_{A_n}\Big) + \sum_{n=1}^{\infty} E^{\mathcal{F}}\Big(\sum_{s=1}^{N} I_{([X_k\leq x_s-\gamma]\triangle[X\leq x_s-\gamma])}I_{A_n}\Big) \quad \text{a.s.}$$

Hence

$$\begin{aligned}
\limsup_{k\to\infty} E^{\mathcal{F}}I_{[|X_k-X|>\varepsilon]}
&\leq \delta + \lim_{k\to\infty} \sum_{n=1}^{\infty} E^{\mathcal{F}}\Big(\sum_{s=1}^{N} I_{([X<x_s+\gamma]\triangle[X_k<x_s+\gamma])}I_{A_n} + \sum_{s=1}^{N} I_{([X_k\leq x_s-\gamma]\triangle[X\leq x_s-\gamma])}I_{A_n}\Big) \\
&= \delta + \sum_{n=1}^{\infty}\sum_{s=1}^{n} \lim_{k\to\infty} I_{A_n}E^{\mathcal{F}}I_{([X<x_s+\gamma]\triangle[X_k<x_s+\gamma])} + \sum_{n=1}^{\infty}\sum_{s=1}^{n} \lim_{k\to\infty} I_{A_n}E^{\mathcal{F}}I_{([X_k\leq x_s-\gamma]\triangle[X\leq x_s-\gamma])} = \delta \quad \text{a.s.},
\end{aligned}$$

since $x_s + \gamma,\, x_s - \gamma \in \zeta_{F_X}$ by the choice of $\gamma$, so each limit vanishes by the hypothesis and the remark following Theorem 2.3. Since $\delta$ is arbitrary, the result follows. □

Theorem 2.5. Let $X$ be an $\mathcal{F}$-measurable random variable. If $X_n \xrightarrow{\mathcal{F}\text{-}D} X$, then $X_n \xrightarrow{\mathcal{F}\text{-}P} X$.

Proof. If $X_n \xrightarrow{\mathcal{F}\text{-}D} X$, then for each $x \in \zeta_{F_X}$ we have $\lim_{n\to\infty} E^{\mathcal{F}}I_{[X_n<x]} = E^{\mathcal{F}}I_{[X<x]} = I_{[X<x]}$ a.s. and, since $I_{[X<x]}$ is $\mathcal{F}$-measurable, $\lim_{n\to\infty} E^{\mathcal{F}}I_{[X_n<x]}I_{[X<x]} = I_{[X<x]}$ a.s. Hence

$$\lim_{n\to\infty} E^{\mathcal{F}}I_{([X_n<x]\triangle[X<x])} = \lim_{n\to\infty} E^{\mathcal{F}}\big(I_{[X_n<x]} - I_{[X_n<x]}I_{[X<x]}\big) + \lim_{n\to\infty} E^{\mathcal{F}}\big(I_{[X<x]} - I_{[X_n<x]}I_{[X<x]}\big) = 0 \quad \text{a.s.}$$

Therefore, by the previous theorem, $X_n \xrightarrow{\mathcal{F}\text{-}P} X$. □

Note that if $\mathcal{F}$ is the trivial $\sigma$-field ($\mathcal{F} = \{\emptyset, \Omega\}$), then we obtain the following well-known result.

Corollary 2.6. Let $C$ be a random variable such that $P[C = c] = 1$ for some $c \in \mathbb{R}$. Then $X_n \xrightarrow{D} C$ if and only if $X_n \xrightarrow{P} C$.
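Corollary 2.6 is easy to observe numerically; the sketch below (not part of the paper; the choice $X_n = c + Z/n$ is illustrative) shows the distribution functions collapsing onto the degenerate law at $c$ exactly as $P[|X_n - c| > \varepsilon]$ vanishes.

```python
# Sketch of Corollary 2.6: for a constant limit, convergence in distribution and
# convergence in probability go together. Here X_n = c + Z/n with Z standard normal.
import numpy as np

rng = np.random.default_rng(2)
c, eps = 1.0, 0.1
Z = rng.standard_normal(200_000)

for n in (1, 10, 100):
    Xn = c + Z / n
    # distance of F_{X_n} from the degenerate distribution at c, away from the jump point
    cdf_gap = np.mean(Xn < c - eps) + (1.0 - np.mean(Xn < c + eps))
    prob_far = np.mean(np.abs(Xn - c) > eps)
    print(n, round(cdf_gap, 3), round(prob_far, 3))
# Both columns tend to 0 together as n grows.
```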

Acknowledgments. We thank the referee for valuable comments.


References

[1] Billingsley, P., Convergence of Probability Measures, John Wiley & Sons, Inc., New York–London–Sydney, 1968.

[2] Fernandez, P., A note on convergence in probability, Bol. Soc. Brasil. Mat. 3 (1972), 13–16.

[3] Majerek, D., Nowak, W. and Zięba, W., On uniform integrability of random variables, Stat. Probab. Lett. 74 (2005), 272–280.

[4] Neveu, J., Discrete-Parameter Martingales, North-Holland Publishing Company, Amsterdam–Oxford, 1975.

[5] Padmanabhan, A.R., Convergence in probability and allied results, Math. Japon. 15 (1970), 111–117.

[6] Zięba, W., On the $L^1_{\mathcal{F}}$ convergence for conditional amarts, J. Multivariate Anal. 26 (1988), 104–110.

Wioletta Nowak
Department of Mathematics
Lublin University of Technology
ul. Nadbystrzycka 38
20-618 Lublin, Poland
e-mail: wnowak@antenor.pol.lublin.pl

Wiesław Zięba
Institute of Mathematics
Maria Curie-Skłodowska University
pl. Marii Curie-Skłodowskiej 1
20-031 Lublin, Poland
e-mail: zieba@golem.umcs.lublin.pl

Received May 11, 2005
