
Two-sided bounds for L p -norms of combinations of products of independent random variables

Ewa Damek, Rafał Latała, Piotr Nayar and Tomasz Tkocz

April 2, 2014

Abstract

We show that for every positive $p$ the $L_p$-norm of linear combinations (with scalar or vector coefficients) of products of i.i.d. nonnegative random variables with $p$-norm one is comparable to the $\ell_p$-norm of the coefficients, and the constants involved are explicit. As a result the same holds for linear combinations of Riesz products. We also establish upper and lower bounds for the $L_p$-moments of partial sums of perpetuities.

Key words and phrases: estimation of moments, product of independent random variables, Riesz’s product, stochastic difference equation, perpetuity.

AMS 2010 Subject classification: Primary 60E15, Secondary 60H25.

1 Introduction and Main Results

Let $X, X_1, X_2, \ldots$ be i.i.d. nondegenerate nonnegative r.v.'s with finite mean. Define

$$R_0 := 1 \quad\text{and}\quad R_i := \prod_{j=1}^{i} X_j \quad\text{for } i = 1, 2, \ldots. \tag{1}$$

Then obviously for any vectors $v_0, v_1, \ldots, v_n$ in a normed space $(F, \|\cdot\|)$,

$$\mathbb{E}\Big\|\sum_{i=0}^{n} v_i R_i\Big\| \le \sum_{i=0}^{n} \|v_i\|\,\mathbb{E}R_i.$$

In [17] it was shown that the opposite inequality holds, i.e.

$$\mathbb{E}\Big\|\sum_{i=0}^{n} v_i R_i\Big\| \ge c_X \sum_{i=0}^{n} \|v_i\|\,\mathbb{E}R_i,$$

where $c_X$ is a constant which depends only on the distribution of $X$.

In this paper we present similar estimates for $L_p$-norms. Our main result is the following.

Research supported by the NCN grant DEC-2012/05/B/ST1/00692 and by the Warsaw Center of Mathematics and Computer Science.

Research supported by the NCN grant DEC-2012/05/B/ST1/00412.


Theorem 1. Let $p > 0$ and let $X, X_1, X_2, \ldots$ be i.i.d. nondegenerate nonnegative r.v.'s such that $\mathbb{E}X^p < \infty$, and let $R_i$ be defined by (1). Then there exist constants $0 < c_{p,X} \le C_{p,X} < \infty$ which depend only on $p$ and the distribution of $X$ such that for any vectors $v_0, v_1, \ldots, v_n$ in a normed space $(F, \|\cdot\|)$,

$$c_{p,X} \sum_{i=0}^{n} \|v_i\|^p\,\mathbb{E}R_i^p \le \mathbb{E}\Big\|\sum_{i=0}^{n} v_i R_i\Big\|^p \le C_{p,X} \sum_{i=0}^{n} \|v_i\|^p\,\mathbb{E}R_i^p.$$
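Although not part of the paper, the two-sided comparison in Theorem 1 is easy to probe numerically. The sketch below is illustrative only: the choice of $X$ uniform on $[0,2]$, the coefficients and the sample size are arbitrary assumptions. It estimates both sides of the bound for scalar coefficients and checks that their ratio is of order one.

```python
import random

random.seed(0)

p = 2.0                                   # moment order (any p > 0 in Theorem 1)
v = [1.0, -2.0, 0.5, 3.0, -1.0, 0.25]     # scalar coefficients v_0, ..., v_n
n = len(v) - 1
N = 20000                                 # Monte Carlo sample size

def sample_products():
    # one draw of (R_0, ..., R_n) with X_j uniform on [0, 2]:
    # nondegenerate, nonnegative, finite p-th moment
    R, out = 1.0, [1.0]
    for _ in range(n):
        R *= random.uniform(0.0, 2.0)
        out.append(R)
    return out

lhs = 0.0                     # estimates E|sum_i v_i R_i|^p
ER_p = [0.0] * (n + 1)        # estimates E R_i^p
for _ in range(N):
    Rs = sample_products()
    lhs += abs(sum(vi * Ri for vi, Ri in zip(v, Rs))) ** p
    for i, Ri in enumerate(Rs):
        ER_p[i] += Ri ** p
lhs /= N
rhs = sum(abs(vi) ** p * ERi / N for vi, ERi in zip(v, ER_p))

ratio = lhs / rhs             # Theorem 1: c_{p,X} <= ratio <= C_{p,X}
print(ratio)
```

For $p = 2$ both sides can also be computed in closed form, so the check is redundant there; rerunning with a non-even $p$ illustrates the general statement.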

In fact we prove a more general result that does not require the identical distribution assumption. Namely, suppose that

$$X_1, X_2, \ldots \text{ are independent nonnegative r.v.'s such that } \mathbb{E}X_i^p < \infty. \tag{2}$$

Further assumptions depend on whether $p \le 1$. For $p \in (0,1]$ we assume that

$$\exists_{\lambda<1}\,\forall_i \quad \mathbb{E}X_i^{p/2} \le \lambda\,(\mathbb{E}X_i^p)^{1/2} \tag{3}$$

and

$$\exists_{0<\delta<1,\,A>1}\,\forall_i \quad \mathbb{E}(X_i^p - \mathbb{E}X_i^p)\,\mathbf{1}_{\{\mathbb{E}X_i^p \le X_i^p \le A\,\mathbb{E}X_i^p\}} \ge \delta\,\mathbb{E}X_i^p. \tag{4}$$

Theorem 2. Let $0 < p \le 1$ and let $X_1, X_2, \ldots$ satisfy assumptions (2), (3) and (4). Then for any vectors $v_0, v_1, \ldots, v_n$ in a normed space $(F, \|\cdot\|)$ we have

$$c(p,\lambda,\delta,A) \sum_{i=0}^{n} \|v_i\|^p\,\mathbb{E}R_i^p \le \mathbb{E}\Big\|\sum_{i=0}^{n} v_i R_i\Big\|^p \le \sum_{i=0}^{n} \|v_i\|^p\,\mathbb{E}R_i^p,$$

where $c(p,\lambda,\delta,A)$ is a positive constant which depends only on $p$, $\lambda$, $\delta$ and $A$.

For $p > 1$, to obtain the lower bound we assume that

$$\exists_{\mu>0,\,A<\infty}\,\forall_i \quad \mathbb{E}|X_i - \mathbb{E}X_i| \ge \mu\,(\mathbb{E}X_i^p)^{1/p} \quad\text{and}\quad \mathbb{E}|X_i - \mathbb{E}X_i|\,\mathbf{1}_{\{X_i > A(\mathbb{E}X_i^p)^{1/p}\}} \le \tfrac14\,\mu\,(\mathbb{E}X_i^p)^{1/p} \tag{5}$$

and

$$\exists_{q>\max\{p-1,1\}}\,\exists_{\lambda<1}\,\forall_i \quad (\mathbb{E}X_i^q)^{1/q} \le \lambda\,(\mathbb{E}X_i^p)^{1/p}. \tag{6}$$

For the upper bound we need the condition

$$\forall_{k=1,2,\ldots,\lceil p\rceil-1}\,\exists_{\lambda_k<1}\,\forall_i \quad (\mathbb{E}X_i^{p-k})^{1/(p-k)} \le \lambda_k\,(\mathbb{E}X_i^{p-k+1})^{1/(p-k+1)}. \tag{7}$$

Theorem 3. Let $p > 1$ and let $X_1, X_2, \ldots$ satisfy assumptions (2), (5), (6) and (7). Then for any vectors $v_0, v_1, \ldots, v_n$ in a normed space $(F, \|\cdot\|)$ we have

$$c(p,\mu,A,q,\lambda) \sum_{i=0}^{n} \|v_i\|^p\,\mathbb{E}R_i^p \le \mathbb{E}\Big\|\sum_{i=0}^{n} v_i R_i\Big\|^p \le C(p,\lambda_1,\ldots,\lambda_{\lceil p\rceil-1}) \sum_{i=0}^{n} \|v_i\|^p\,\mathbb{E}R_i^p,$$

where $c(p,\mu,A,q,\lambda)$ is a positive constant which depends only on $p$, $\mu$, $A$, $q$ and $\lambda$, and $C(p,\lambda_1,\ldots,\lambda_{\lceil p\rceil-1})$ is a constant which depends only on $p$, $\lambda_1, \ldots, \lambda_{\lceil p\rceil-1}$.


Remark. The proofs presented below show that Theorem 2 holds with $c(p,\lambda,\delta,A) = \frac{\delta^3}{16k}$, where $k$ is an integer such that $k\lambda^{2k-2} \le \frac{\delta^3(1-\lambda)^2}{2^{12}A}$. In Theorem 3 we can take

$$C(p,\lambda_1,\ldots,\lambda_{\lceil p\rceil-1}) = 2^{p(p+1)/2} \prod_{1\le j\le \lceil p\rceil-1} \frac{1}{1-\lambda_j^{p-j}}$$

and

$$c(p,\mu,A,q,\lambda) = \frac{\mu^{3p}}{8k\cdot 2^{10p}\cdot 3^p},$$

where $k$ is an integer such that $k\lambda^{pk} \le \frac{(1-\lambda)\mu^{3p}}{8C_0\cdot 2^{10p}\cdot 3^p}$ and

$$C_0 = (1-\lambda)^{1-p}\Big(\frac{2A}{3\lambda}\Big)^{p}\Big(\frac{2p}{(q+1-p)\ln 2}\Big)^{p/q} 48^{2p^2/\min\{p-1,1\}}.$$

Theorem 1 yields by conditioning a similar result for products of symmetric r.v.'s.

Corollary 4. Let $p > 0$ and let $X, X_1, X_2, \ldots$ be an i.i.d. sequence of symmetric r.v.'s such that $\mathbb{E}|X|^p < \infty$ and $\mathbb{P}(|X| = t) < 1$ for all $t$. Then there exist constants $0 < c_{p,X} \le C_{p,X} < \infty$ which depend only on $p$ and the distribution of $X$ such that for any vectors $v_0, v_1, \ldots, v_n$ in a normed space $(F, \|\cdot\|)$,

$$c_{p,X} \sum_{i=0}^{n} \|v_i\|^p\,\mathbb{E}|R_i|^p \le \mathbb{E}\Big\|\sum_{i=0}^{n} v_i R_i\Big\|^p \le C_{p,X} \sum_{i=0}^{n} \|v_i\|^p\,\mathbb{E}|R_i|^p.$$

Proof. Let $(\varepsilon_i)$ be a sequence of independent symmetric $\pm1$ r.v.'s, independent of $(X_i)$. Then

$$\mathbb{E}\Big\|\sum_{i=0}^{n} v_i R_i\Big\|^p = \mathbb{E}\Big\|\sum_{i=0}^{n} v_i \prod_{k=1}^{i}\varepsilon_k \prod_{k=1}^{i}|X_k|\Big\|^p$$

and it is enough to use Theorem 1 for the variables $(|X_i|)$.

Another consequence of Theorem 1 is an estimate for $L_p$-norms of linear combinations of Riesz products. Let $\mathbb{T} = \mathbb{R}/2\pi\mathbb{Z}$ be the one-dimensional torus and let $m$ be the normalized Haar measure on $\mathbb{T}$. Riesz products are defined on $\mathbb{T}$ by the formula

$$\bar R_i(t) = \prod_{j=1}^{i}\big(1+\cos(n_j t)\big), \quad i = 1, 2, \ldots, \tag{8}$$

where $(n_k)_{k\ge1}$ is a lacunary increasing sequence of positive integers. It is well known that if the coefficients $n_k$ grow sufficiently fast then $\|\sum_{i=0}^{n} a_i \bar R_i\|_{L_p(\mathbb{T})} \sim (\mathbb{E}|\sum_{i=0}^{n} a_i R_i|^p)^{1/p}$ for $p \ge 1$, where the $R_i$ are products of independent random variables distributed as $\bar R_1$. Together with Theorem 1 this gives an estimate for $\|\sum_{i=0}^{n} a_i \bar R_i\|_{L_p(\mathbb{T})}$. Here is the more quantitative result.


Corollary 5. Suppose that $(n_k)_{k\ge1}$ is an increasing sequence of positive integers such that $n_{k+1}/n_k \ge 3$ and $\sum_{k=1}^{\infty} n_k/n_{k+1} < \infty$. Then for any coefficients $a_0, a_1, \ldots, a_n \in \mathbb{R}$ and $p \ge 1$,

$$c_p \sum_{i=0}^{n} |a_i|^p \int_{\mathbb{T}} |\bar R_i(t)|^p\,dm(t) \le \int_{\mathbb{T}} \Big|\sum_{i=0}^{n} a_i \bar R_i(t)\Big|^p dm(t) \le C_p \sum_{i=0}^{n} |a_i|^p \int_{\mathbb{T}} |\bar R_i(t)|^p\,dm(t), \tag{9}$$

where $0 < c_p \le C_p < \infty$ are constants depending only on $p$ and the sequence $(n_k)$.

Proof. Let $X_1, X_2, \ldots$ be independent random variables distributed as $1 + \cos(Y)$, where $Y$ is uniformly distributed on $[0, 2\pi]$, and let $R_i$ be as in (1). By the result of Y. Meyer [18], $\|\sum_{i=0}^{n} a_i \bar R_i\|_{L_p} \sim (\mathbb{E}|\sum_{i=0}^{n} a_i R_i|^p)^{1/p}$ (in particular also $\|\bar R_i\|_{L_p} \sim (\mathbb{E}R_i^p)^{1/p}$) and the estimate follows by Theorem 1.

Theorem 1 also has an immediate application to the stationary solution $S$ of the random difference equation

$$S = XS + B, \tag{10}$$

where the equality is meant in law, $(X, B)$ is a random variable with values in $[0,\infty)\times\mathbb{R}^d$, independent of $S$, such that for some $p > 0$,

$$\mathbb{E}X^p = 1, \quad \mathbb{E}\|B\|^p < \infty \quad\text{and}\quad \mathbb{P}(X = 1) < 1. \tag{GK1}$$

Over the last 40 years equation (10) and its various modifications have attracted a lot of attention [1, 2, 3, 5, 8, 11, 12, 13, 14, 15, 16, 19, 20]. It has a rather wide spectrum of applications, including random walks in random environment, branching processes, fractals, finance, insurance, telecommunications, and various physical and biological models. In particular, the tail behaviour of $S$ is of interest.

It is well known that in law

$$S = \sum_{i=1}^{\infty} R_{i-1} B_i,$$

where $R_{i-1} = X_1 \cdots X_{i-1}$, $R_0 = 1$ and $(X_i, B_i)$ is an i.i.d. sequence of r.v.'s with the same distribution as $(X, B)$. Under the additional assumption that

$$\log X \text{ conditioned on } \{X \ne 0\} \text{ is non-lattice and } \mathbb{E}X^p\log^+ X < \infty, \tag{GK2}$$

$S$ has a heavy-tailed behaviour, i.e. the limit

$$\lim_{t\to\infty} t^p\,\mathbb{P}(\|S\| > t) = c(X, B)$$

exists, and $c(X, B)$ is strictly positive provided that $\mathbb{P}(Xv + B = v) < 1$ for every $v \in \mathbb{R}^d$. If $\mathbb{P}(Xv + B = v) = 1$ then $S_n = v - R_{n-1}v \to v = S$. Assumptions (GK1) and (GK2) together


with $\mathbb{P}(Xv + B = v) < 1$ will later on be referred to as the Goldie–Kesten conditions. Let

$$S_n = \sum_{i=1}^{n} R_{i-1} B_i.$$

It turns out that the sequence $\mathbb{E}\|S_n\|^p$ is closely related to $c(X, B)$. Recently it has been proved in [6] that under the Goldie–Kesten conditions plus the slightly stronger moment assumption $\mathbb{E}(X^{p+\varepsilon} + \|B\|^{p+\varepsilon}) < \infty$ for some $\varepsilon > 0$, we have

$$\lim_{n\to\infty} \frac{1}{np\rho}\,\mathbb{E}\|S_n\|^p = c(X, B) > 0, \quad\text{where } \rho := \mathbb{E}X^p \log X.$$

Now suppose that $X$ and $B$ are independent. Then Theorem 1 implies that for every $n$,

$$c_{p,X}\,\mathbb{E}\|B\|^p \le \frac1n\,\mathbb{E}\|S_n\|^p \le C_{p,X}\,\mathbb{E}\|B\|^p, \tag{11}$$

which gives uniform bounds on the Goldie constant $c(X, B)$ depending only on the law of $X$ and on $\mathbb{E}\|B\|^p$, and independent of the dimension. Moreover, in some particular cases, when the constants $\lambda, \delta, \mu, q, \lambda_k$ in (3)–(7) can be estimated more carefully, (11) may give some information about the size of the Goldie constant, which is of some value, especially in the situation when none of the existing formulae for it is satisfactory enough (see [7, 10, 6, 4]).
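The bound (11) is easy to illustrate by simulation. The following sketch is not part of the paper; the choices $p = 1$, $X$ uniform on $[0,2]$ (so $\mathbb{E}X = 1$, hence $\mathbb{E}R_i = 1$) and $B$ uniform on $[-1,1]$ (so $\mathbb{E}|B| = 1/2$) are arbitrary assumptions. With them, $\mathbb{E}|S_n|/n$ should stay in a fixed band that neither grows nor shrinks with $n$.

```python
import random

random.seed(1)

N = 20000   # Monte Carlo sample size

def avg_moment(n):
    # estimate E|S_n| / n for S_n = sum_{i=1}^n R_{i-1} B_i
    tot = 0.0
    for _ in range(N):
        R, s = 1.0, 0.0
        for _ in range(n):
            s += R * random.uniform(-1.0, 1.0)   # add R_{i-1} * B_i
            R *= random.uniform(0.0, 2.0)        # update the running product
        tot += abs(s)
    return tot / N / n

m5, m15 = avg_moment(5), avg_moment(15)
print(m5, m15)   # both should lie in (0, 1/2]
```

Here the upper end of the band, $\mathbb{E}|S_n| \le n\,\mathbb{E}|B|$, follows from the triangle inequality, while $\mathbb{E}|S_n| \ge \mathbb{E}|B_1|$ follows from Jensen's inequality after conditioning on $B_1$.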

We can go even further. With a slight modification of the proof we can get rid of the independence of $X$ and $B$ and obtain the following theorem.

Theorem 6. Suppose that $F$ is a separable Banach space. Let $p > 0$ and let an i.i.d. sequence $(X, B), (X_1, B_1), \ldots$ with values in $[0,\infty)\times F$ be such that $X$ is nondegenerate and $\mathbb{E}\|B\|^p, \mathbb{E}X^p < \infty$. Assume additionally that

$$\mathbb{P}(Xv + B = v) < 1 \quad\text{for every } v \in F. \tag{12}$$

Then there are constants $0 < c_p(X,B) \le C_p(X,B) < \infty$ which depend on $p$ and the distribution of $(X, B)$ such that for every $n$,

$$c_p(X,B)\,\mathbb{E}\|B\|^p \sum_{i=1}^{n} \mathbb{E}R_{i-1}^p \le \mathbb{E}\Big\|\sum_{i=1}^{n} R_{i-1}B_i\Big\|^p \le C_p(X,B)\,\mathbb{E}\|B\|^p \sum_{i=1}^{n} \mathbb{E}R_{i-1}^p. \tag{13}$$

Theorem 6 specified to our situation with $\mathbb{E}X^p = 1$ says that

$$c_p(X,B)\,\mathbb{E}\|B\|^p \le \frac1n\,\mathbb{E}\|S_n\|^p \le C_p(X,B)\,\mathbb{E}\|B\|^p. \tag{14}$$

This gives an estimate for the Goldie constant, but now with $c_p(X,B)$ and $C_p(X,B)$ depending on the law of $(X,B)$. Again, in particular cases, a careful examination of the constants involved in the proof may give a more satisfactory answer. Also, in view of Theorem 6, it would be worth relaxing the assumptions of [6].

The paper is organized as follows. In Sections 2 and 3 we derive the lower bounds in Theorems 2 and 3. Then in Section 4 we establish the upper bounds in both theorems. We conclude in Section 5 with a discussion of the proof of Theorem 6.

2 Lower bound - p > 1

In this section we will show the lower bound in Theorem 3. Since it is only a matter of normalization, we will assume that

$$X_1, X_2, \ldots \text{ are independent nonnegative r.v.'s such that } \mathbb{E}X_i^p = 1. \tag{15}$$

In particular this implies that $\mathbb{E}R_i^p = 1$ for all $i$. We also set for $k = 1, 2, \ldots$

$$R_{k,k-1} \equiv 1 \quad\text{and}\quad R_{k,i} := \prod_{j=k}^{i} X_j \quad\text{for } i \ge k.$$

Observe that $R_i = R_k R_{k+1,i}$ for $i \ge k \ge 0$.

We begin with several lemmas.

Lemma 7. Suppose that a nonnegative random variable $X$ satisfies $\mathbb{E}|X - \mathbb{E}X| \ge \mu$ and $\mathbb{E}|X - \mathbb{E}X|\,\mathbf{1}_{\{X>A\}} \le \frac14\mu$. Then for all $p \ge 1$ and $u, v \in (F, \|\cdot\|)$ we have

$$\mathbb{E}\|uX+v\|^p \ge \mathbb{E}\|uX+v\|^p\,\mathbf{1}_{\{X\le A\}} \ge \frac{\mu^p}{8^p}\min\Big\{1, \frac{1}{(\mathbb{E}X)^p}\Big\}\max\{\|u\|^p, \|v\|^p\}.$$

Proof. Let $Y$ have the same distribution as $X$ conditioned on the set $\{X \le A\}$, and define $t := \mathbb{E}Y \le \mathbb{E}X$. Clearly, $\mathbb{E}(X - \mathbb{E}X)_+ = \frac12\mathbb{E}|X - \mathbb{E}X| \ge \frac12\mu$. Therefore,

$$\mathbb{E}|X - t|\,\mathbf{1}_{\{X\le A\}} \ge \mathbb{E}(X-t)_+\mathbf{1}_{\{X\le A\}} \ge \mathbb{E}(X-\mathbb{E}X)_+\mathbf{1}_{\{X\le A\}} = \mathbb{E}(X-\mathbb{E}X)_+ - \mathbb{E}(X-\mathbb{E}X)_+\mathbf{1}_{\{X>A\}} \ge \frac12\mu - \mathbb{E}|X-\mathbb{E}X|\,\mathbf{1}_{\{X>A\}} \ge \frac12\mu - \frac14\mu = \frac14\mu.$$

We obtain

$$\mathbb{E}\|uY+v\| = \frac1t\,\mathbb{E}\|v(t-Y) + (tu+v)Y\| \ge \frac1t\,\|v\|\,\mathbb{E}|Y-t| - \|tu+v\|.$$

Since $\mathbb{E}\|uY+v\| \ge \|u\,\mathbb{E}Y+v\| = \|tu+v\|$, we have

$$\mathbb{E}\|uY+v\| \ge \frac{1}{2t}\|v\|\,\mathbb{E}|Y-t| = \frac{\|v\|}{2t\,\mathbb{P}(X\le A)}\,\mathbb{E}|X-t|\,\mathbf{1}_{\{X\le A\}} \ge \frac{\mu}{8t}\cdot\frac{\|v\|}{\mathbb{P}(X\le A)}.$$

We arrive at

$$\mathbb{E}\|uX+v\|^p\,\mathbf{1}_{\{X\le A\}} \ge \big(\mathbb{E}\|uX+v\|\,\mathbf{1}_{\{X\le A\}}\big)^p = \big(\mathbb{E}\|uY+v\|\,\mathbb{P}(X\le A)\big)^p \ge \frac{\mu^p}{8^p t^p}\|v\|^p \ge \frac{\mu^p}{8^p(\mathbb{E}X)^p}\|v\|^p.$$

We also have

$$\mathbb{E}\|uY+v\| = \mathbb{E}\|u(Y-t) + tu + v\| \ge \|u\|\,\mathbb{E}|Y-t| - \|tu+v\|.$$

Therefore

$$\mathbb{E}\|uY+v\| \ge \frac{\|u\|}{2}\,\mathbb{E}|Y-t| \ge \frac{\mu}{8}\cdot\frac{\|u\|}{\mathbb{P}(X\le A)}$$

and as before we get $\mathbb{E}\|uX+v\|^p\,\mathbf{1}_{\{X\le A\}} \ge \frac{\mu^p}{8^p}\|u\|^p$.

Lemma 8. Assume that (15) and (5) hold. Then for any $v_0, v_1, \ldots, v_n \in (F, \|\cdot\|)$ we have

$$\mathbb{E}\Big\|\sum_{i=0}^{n} v_i R_i\Big\|^p \ge \frac{\mu^{2p}}{64^p}\max_{1\le i\le n}\|v_i\|^p \ge \frac{\mu^{2p}}{64^p}\cdot\frac1n\sum_{i=1}^{n}\|v_i\|^p.$$

Proof. For $1 \le j \le n$ we have $\sum_{i=0}^{n} v_i R_i = Y + X_j(v_j R_{j-1} + X_{j+1}Z)$, where $Y$ and $Z$ are independent of $X_j$ and $X_{j+1}$. Observe that $\mathbb{E}X_j \le 1$ and $\mathbb{E}X_{j+1} \le 1$. Thus, using Lemma 7 twice, we obtain

$$\mathbb{E}\Big\|\sum_{i=0}^{n} v_i R_i\Big\|^p \ge \frac{\mu^p}{8^p}\,\mathbb{E}\|v_j R_{j-1} + X_{j+1}Z\|^p \ge \frac{\mu^{2p}}{64^p}\|v_j\|^p\,\mathbb{E}R_{j-1}^p = \frac{\mu^{2p}}{64^p}\|v_j\|^p.$$

Lemma 9. Assume that (15) holds and that there exist $q > 1$ and $0 < \lambda < 1$ such that $(\mathbb{E}X_i^q)^{1/q} \le \lambda$ for all $i$. Then for any $v_0, v_1, \ldots, v_n \in (F, \|\cdot\|)$ and $t > 0$,

$$\mathbb{P}\left(\Big\|\sum_{i=0}^{n} v_i R_i\Big\|^p \ge t\sum_{i=0}^{n}\lambda^i\|v_i\|^p\right) \le (1-\lambda)^{(1-p)q/p}\,t^{-q/p}.$$

Proof. Using Minkowski's and Hölder's inequalities we obtain

$$\Big(\mathbb{E}\Big\|\sum_{i=0}^{n} v_i R_i\Big\|^q\Big)^{1/q} \le \sum_{i=0}^{n}\big(\mathbb{E}\|v_i R_i\|^q\big)^{1/q} \le \sum_{i=0}^{n}\|v_i\|\lambda^i = \sum_{i=0}^{n}\|v_i\|\lambda^{i/p}\cdot\lambda^{\frac{p-1}{p}i} \le \Big(\sum_{i=0}^{n}\|v_i\|^p\lambda^i\Big)^{1/p}\Big(\sum_{i=0}^{n}\lambda^i\Big)^{\frac{p-1}{p}}.$$

Thus,

$$\mathbb{E}\Big\|\sum_{i=0}^{n} v_i R_i\Big\|^q \le \Big(\sum_{i=0}^{n}\|v_i\|^p\lambda^i\Big)^{q/p}(1-\lambda)^{-\frac{(p-1)q}{p}}.$$

By Chebyshev's inequality we get

$$\mathbb{P}\left(\Big\|\sum_{i=0}^{n} v_i R_i\Big\|^q \ge t^{q/p}\Big(\sum_{i=0}^{n}\lambda^i\|v_i\|^p\Big)^{q/p}\right) \le (1-\lambda)^{(1-p)q/p}\,t^{-q/p}.$$

Lemma 10. Let $Y, Z$ be random vectors with values in a normed space $F$ and let $p \ge 1$. Suppose that $\mathbb{E}\|Y\|^{p-1}\|Z\| \le \gamma\,\mathbb{E}\|Z\|^p$. Then

$$\mathbb{E}\|Y+Z\|^p \ge \mathbb{E}\|Y\|^p + \Big(\frac{1}{3^p} - 2p\gamma\Big)\mathbb{E}\|Z\|^p.$$

Proof. For any real numbers $a, b$ we have $|a+b|^p \ge |a|^p - p|a|^{p-1}|b|$. If, additionally, $|a| \le \frac13|b|$, then $|a+b|^p \ge |a|^p + \frac{1}{3^p}|b|^p$. Taking $a = \|Y\|$, $b = -\|Z\|$ and using the inequality $\|Y+Z\| \ge \big|\|Y\|-\|Z\|\big|$ we obtain

$$\mathbb{E}\|Y+Z\|^p = \mathbb{E}\|Y+Z\|^p\mathbf{1}_{\{\|Y\|\le\frac13\|Z\|\}} + \mathbb{E}\|Y+Z\|^p\mathbf{1}_{\{\|Y\|>\frac13\|Z\|\}}$$
$$\ge \mathbb{E}\|Y\|^p\mathbf{1}_{\{\|Y\|\le\frac13\|Z\|\}} + \frac{1}{3^p}\mathbb{E}\|Z\|^p\mathbf{1}_{\{\|Y\|\le\frac13\|Z\|\}} + \mathbb{E}\|Y\|^p\mathbf{1}_{\{\|Y\|>\frac13\|Z\|\}} - p\,\mathbb{E}\|Y\|^{p-1}\|Z\|\mathbf{1}_{\{\|Y\|>\frac13\|Z\|\}}$$
$$= \mathbb{E}\|Y\|^p + \frac{1}{3^p}\mathbb{E}\|Z\|^p - \mathbb{E}\Big(\frac{1}{3^p}\|Z\|^p + p\|Y\|^{p-1}\|Z\|\Big)\mathbf{1}_{\{\|Y\|>\frac13\|Z\|\}}.$$

Note that

$$\mathbb{E}\Big(\frac{1}{3^p}\|Z\|^p + p\|Y\|^{p-1}\|Z\|\Big)\mathbf{1}_{\{\|Y\|>\frac13\|Z\|\}} \le \Big(\frac13 + p\Big)\mathbb{E}\|Y\|^{p-1}\|Z\| \le 2p\gamma\,\mathbb{E}\|Z\|^p.$$

Therefore,

$$\mathbb{E}\|Y+Z\|^p \ge \mathbb{E}\|Y\|^p + \frac{1}{3^p}\mathbb{E}\|Z\|^p - 2p\gamma\,\mathbb{E}\|Z\|^p.$$
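The two pointwise inequalities on which the proof of Lemma 10 rests are elementary; the quick grid check below (illustrative only, not from the paper) verifies them numerically for several values of $p \ge 1$.

```python
# For p >= 1 and real a, b:  |a+b|^p >= |a|^p - p |a|^(p-1) |b|  (convexity of |t|^p),
# and if moreover |a| <= |b|/3:  |a+b|^p >= |a|^p + (1/3^p) |b|^p.
def check(p, grid):
    for a in grid:
        for b in grid:
            lhs = abs(a + b) ** p
            assert lhs >= abs(a) ** p - p * abs(a) ** (p - 1) * abs(b) - 1e-9
            if abs(a) <= abs(b) / 3:
                assert lhs >= abs(a) ** p + abs(b) ** p / 3 ** p - 1e-9

grid = [x / 4.0 for x in range(-40, 41)]   # points in [-10, 10]
for p in (1.0, 1.5, 2.0, 3.7):
    check(p, grid)
print("ok")
```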

We are now able to state the key proposition which will easily yield the lower bound in Theorem 3.

Proposition 11. Let $p > 1$ and let the r.v.'s $X_1, X_2, \ldots$ satisfy assumptions (15), (5) and (6). Then there exist constants $\varepsilon_0, \varepsilon_1, C_0 > 0$ depending only on $p, \mu, A, q$ and $\lambda$ such that for any vectors $v_0, v_1, \ldots, v_n$ in a normed space $(F, \|\cdot\|)$ and any $k \ge 1$ we have

$$\mathbb{E}\Big\|\sum_{i=0}^{n} v_i R_i\Big\|^p \ge \varepsilon_0\|v_0\|^p + \sum_{i=1}^{n}\Big(\frac{\varepsilon_1}{k} - c_i\Big)\|v_i\|^p, \tag{16}$$

where

$$c_i = 0 \text{ for } 1\le i\le k-1, \qquad c_i = \Phi\sum_{j=k}^{i}\lambda^j \text{ for } i\ge k \qquad\text{and}\qquad \Phi = C_0\lambda^{(p-1)k}.$$

Proof. Define

$$\varepsilon_0 := \min\Big\{\frac{1}{4\cdot3^p}, \frac{\mu^p}{8\cdot24^p}\Big\}, \qquad \varepsilon_1 := \min\Big\{\frac{\mu^p}{8^p}, \frac{\mu^{2p}}{2^{p-1}\cdot64^p}\Big\}\,\varepsilon_0;$$

the value of $C_0$ will be chosen later. In the proof, $\varepsilon_2, C_1, C_2, C_3$ will denote finite nonnegative constants that depend only on the parameters $p, \mu, A, q$ and $\lambda$.

We fix $k \ge 1$ and prove (16) by induction on $n$. From Lemma 7 and Lemma 8 we obtain

$$\mathbb{E}\Big\|\sum_{i=0}^{n} v_i R_i\Big\|^p \ge 2\varepsilon_0\|v_0\|^p, \qquad \mathbb{E}\Big\|\sum_{i=0}^{n} v_i R_i\Big\|^p \ge \frac{2\varepsilon_1}{n}\sum_{i=1}^{n}\|v_i\|^p.$$

Therefore for $n \le k$ we have

$$\mathbb{E}\Big\|\sum_{i=0}^{n} v_i R_i\Big\|^p \ge \varepsilon_0\|v_0\|^p + \frac{\varepsilon_1}{k}\sum_{i=1}^{n}\|v_i\|^p.$$

Suppose that the induction assertion holds for some $n \ge k$; we show it for $n+1$. To this end we consider two cases.

Case 1. $\varepsilon_0\|v_0\|^p \le \Phi\sum_{i=k}^{n+1}\lambda^i\|v_i\|^p$.

Applying the induction assumption conditionally on $X_1$ we obtain

$$\mathbb{E}\Big\|\sum_{i=0}^{n+1} v_i R_i\Big\|^p \ge \varepsilon_0\,\mathbb{E}\|v_0 + v_1X_1\|^p + \sum_{i=2}^{n+1}\Big(\frac{\varepsilon_1}{k} - c_{i-1}\Big)\mathbb{E}\|X_1 v_i\|^p \ge \frac{\varepsilon_1}{k}\|v_1\|^p + \sum_{i=2}^{n+1}\Big(\frac{\varepsilon_1}{k} - c_{i-1}\Big)\|v_i\|^p$$
$$\ge \varepsilon_0\|v_0\|^p - \Phi\sum_{i=k}^{n+1}\lambda^i\|v_i\|^p + \frac{\varepsilon_1}{k}\|v_1\|^p + \sum_{i=2}^{n+1}\Big(\frac{\varepsilon_1}{k} - c_{i-1}\Big)\|v_i\|^p = \varepsilon_0\|v_0\|^p + \sum_{i=1}^{n+1}\Big(\frac{\varepsilon_1}{k} - c_i\Big)\|v_i\|^p,$$

where the second inequality follows from Lemma 7.

Case 2. $\varepsilon_0\|v_0\|^p > \Phi\sum_{i=k}^{n+1}\lambda^i\|v_i\|^p$.

Define the event $A_k \in \sigma(X_1, \ldots, X_k)$ by

$$A_k := \big\{X_1 \le A,\ R_{2,k} \le 2^{1/q}\lambda^{k-1}\big\}.$$

By the induction assumption used conditionally on $X_1, \ldots, X_k$ we have

$$\mathbb{E}\Big\|\sum_{i=0}^{n+1} v_i R_i\Big\|^p\mathbf{1}_{\Omega\setminus A_k} \ge \varepsilon_0\,\mathbb{E}\Big\|\sum_{i=0}^{k} v_i R_i\Big\|^p\mathbf{1}_{\Omega\setminus A_k} + \sum_{i=k+1}^{n+1}\Big(\frac{\varepsilon_1}{k} - c_{i-k}\Big)\mathbb{E}\|v_i R_k\|^p\mathbf{1}_{\Omega\setminus A_k}. \tag{17}$$

We have by Chebyshev's inequality and (6),

$$\mathbb{P}\big(R_{2,k} \ge 2^{1/q}\lambda^{k-1}\big) \le \frac{\mathbb{E}R_{2,k}^q}{2\lambda^{(k-1)q}} \le \frac12. \tag{18}$$

Together with (5) this implies $\mathbb{P}(A_k) > 0$. Let $(Y, Y_0, Z)$ have the same distribution as the random vector $\big(\sum_{i=k}^{n+1} v_i R_i,\ \sum_{i=k}^{n+1} v_i R_{k+1,i},\ \sum_{i=0}^{k-1} v_i R_i\big)$ conditioned on the event $A_k$. Note that

$$\mathbb{E}\Big\|\sum_{i=0}^{n+1} v_i R_i\Big\|^p\mathbf{1}_{A_k} = \mathbb{P}(A_k)\,\mathbb{E}\|Y+Z\|^p.$$

Applying Lemma 7 conditionally we obtain

$$\mathbb{E}\|Z\|^p = \frac{1}{\mathbb{P}(A_k)}\,\mathbb{E}\Big\|\sum_{i=0}^{k-1} v_i R_i\Big\|^p\mathbf{1}_{\{X_1\le A\}}\mathbf{1}_{\{R_{2,k}\le2^{1/q}\lambda^{k-1}\}} \ge \frac{\mu^p}{8^p}\|v_0\|^p\,\frac{\mathbb{P}(R_{2,k}\le2^{1/q}\lambda^{k-1})}{\mathbb{P}(A_k)} = \frac{\mu^p}{8^p}\|v_0\|^p\,\frac{1}{\mathbb{P}(X_1\le A)} \ge \frac{\mu^p}{8^p}\|v_0\|^p. \tag{19}$$

Note that $Y_0$ has the same distribution as $\sum_{i=k}^{n+1} v_i R_{k+1,i}$ and is independent of $Z$. We have for $t > 0$,

$$\mathbb{P}(\|Y\|^p \ge t\,\mathbb{E}\|Z\|^p) \le \mathbb{P}\Big(A^p\lambda^{p(k-1)}2^{p/q}\|Y_0\|^p \ge t\,\frac{\mu^p}{8^p}\|v_0\|^p\Big) \le \mathbb{P}\Big(A^p\lambda^{p(k-1)}2^{p/q}\|Y_0\|^p \ge t\,\frac{\mu^p}{8^p}\,\frac{\Phi}{\varepsilon_0}\sum_{i=k}^{n+1}\lambda^i\|v_i\|^p\Big)$$
$$= \mathbb{P}\Big(\|Y_0\|^p \ge t\,C_0\varepsilon_2\sum_{i=k}^{n+1}\lambda^{i-k}\|v_i\|^p\Big) \le \frac{C_1}{(tC_0)^{q/p}}, \tag{20}$$

where the last inequality follows by Lemma 9 (recall that $\varepsilon_2$ and $C_1$ denote constants depending only on $p, \mu, A, q$ and $\lambda$).


In order to use Lemma 10 we would like to estimate $\mathbb{E}\|Y\|^{p-1}\|Z\|$. To this end take $\delta > 0$ and observe first that

$$\mathbb{E}\|Y\|^{p-1}\|Z\| \le \mathbb{E}\|Y\|^{p-1}\|Z\|\mathbf{1}_{\{\|Y\|^p\le\delta\mathbb{E}\|Z\|^p\}} + \mathbb{E}\|Y\|^{p-1}\|Z\|\mathbf{1}_{\{\|Z\|^p\le\delta\mathbb{E}\|Z\|^p\}} + \mathbb{E}\|Y\|^{p-1}\|Z\|\mathbf{1}_{\{\|Y\|^p>\delta\mathbb{E}\|Z\|^p\}}\mathbf{1}_{\{\|Z\|^p>\delta\mathbb{E}\|Z\|^p\}}. \tag{21}$$

Clearly,

$$\mathbb{E}\|Y\|^{p-1}\|Z\|\mathbf{1}_{\{\|Y\|^p\le\delta\mathbb{E}\|Z\|^p\}} \le \delta^{\frac{p-1}{p}}\big(\mathbb{E}\|Z\|^p\big)^{\frac{p-1}{p}}\,\mathbb{E}\|Z\| \le \delta^{\frac{p-1}{p}}\,\mathbb{E}\|Z\|^p. \tag{22}$$

To estimate the next term in (21) note that

$$\mathbb{E}\|Y\|^{p-1}\|Z\|\mathbf{1}_{\{\|Z\|^p\le\delta\mathbb{E}\|Z\|^p\}} \le \delta^{1/p}\big(\mathbb{E}\|Z\|^p\big)^{1/p}\,\mathbb{E}\|Y\|^{p-1}.$$

Using estimate (20) we obtain

$$\mathbb{E}\|Y\|^{p-1} = \big(\mathbb{E}\|Z\|^p\big)^{\frac{p-1}{p}}\int_0^\infty \mathbb{P}\Big(\|Y\|^p \ge s^{\frac{p}{p-1}}\,\mathbb{E}\|Z\|^p\Big)\,ds \le \big(\mathbb{E}\|Z\|^p\big)^{\frac{p-1}{p}}\int_0^\infty \min\big\{1,\ C_1C_0^{-q/p}s^{-\frac{q}{p-1}}\big\}\,ds \le \big(\mathbb{E}\|Z\|^p\big)^{\frac{p-1}{p}}\big(1 + C_2C_0^{-q/p}\big),$$

where the last inequality follows since $q > p-1$. Thus,

$$\mathbb{E}\|Y\|^{p-1}\|Z\|\mathbf{1}_{\{\|Z\|^p\le\delta\mathbb{E}\|Z\|^p\}} \le \delta^{1/p}\big(1 + C_2C_0^{-q/p}\big)\,\mathbb{E}\|Z\|^p. \tag{23}$$

We are left with estimating the last term in (21). We have

$$\mathbb{E}\|Y\|^{p-1}\|Z\|\mathbf{1}_{\{\|Y\|^p>\delta\mathbb{E}\|Z\|^p\}}\mathbf{1}_{\{\|Z\|^p>\delta\mathbb{E}\|Z\|^p\}} = \sum_{m=0}^{\infty}\mathbb{E}\|Y\|^{p-1}\|Z\|\mathbf{1}_{\{2^m\delta\mathbb{E}\|Z\|^p<\|Y\|^p\le2^{m+1}\delta\mathbb{E}\|Z\|^p\}}\mathbf{1}_{\{\|Z\|^p>\delta\mathbb{E}\|Z\|^p\}}$$
$$\le \sum_{m=0}^{\infty}2^{(m+1)\frac{p-1}{p}}\delta^{\frac{p-1}{p}}\,\mathbb{E}\big(\mathbb{E}\|Z\|^p\big)^{\frac{p-1}{p}}\|Z\|\mathbf{1}_{\{2^m\delta\mathbb{E}\|Z\|^p<\|Y\|^p\}}\mathbf{1}_{\{\|Z\|^p>\delta\mathbb{E}\|Z\|^p\}} \le \delta^{\frac{p-1}{p}}\sum_{m=0}^{\infty}2^{(m+1)\frac{p-1}{p}}\,\mathbb{E}\Big(\frac{\|Z\|^p}{\delta}\Big)^{\frac{p-1}{p}}\|Z\|\mathbf{1}_{\{2^m\delta\mathbb{E}\|Z\|^p<\|Y\|^p\}}$$
$$= \sum_{m=0}^{\infty}2^{(m+1)\frac{p-1}{p}}\,\mathbb{E}\|Z\|^p\mathbf{1}_{\{2^m\delta\mathbb{E}\|Z\|^p<\|Y\|^p\}}.$$

Recall that $Z$ and $Y_0$ are independent. Therefore, as in (20), we get

$$\mathbb{E}\|Z\|^p\mathbf{1}_{\{2^m\delta\mathbb{E}\|Z\|^p<\|Y\|^p\}} \le \mathbb{E}\|Z\|^p\mathbf{1}_{\{\|Y_0\|^p\ge2^m\delta C_0\varepsilon_2\sum_{i=k}^{n+1}\lambda^{i-k}\|v_i\|^p\}} \le \mathbb{E}\|Z\|^p\,\mathbb{P}\Big(\|Y_0\|^p \ge 2^m\delta C_0\varepsilon_2\sum_{i=k}^{n+1}\lambda^{i-k}\|v_i\|^p\Big) \le \mathbb{E}\|Z\|^p\,\frac{C_1}{(2^m\delta C_0)^{q/p}}.$$


We arrive at

$$\mathbb{E}\|Y\|^{p-1}\|Z\|\mathbf{1}_{\{\|Y\|^p>\delta\mathbb{E}\|Z\|^p\}}\mathbf{1}_{\{\|Z\|^p>\delta\mathbb{E}\|Z\|^p\}} \le \mathbb{E}\|Z\|^p\,\frac{C_1}{(\delta C_0)^{q/p}}\sum_{m=0}^{\infty}2^{(m+1)\frac{p-1}{p}}\,2^{-\frac{mq}{p}} \le \mathbb{E}\|Z\|^p\,\frac{C_3}{(\delta C_0)^{q/p}}, \tag{24}$$

where we have used the fact that $q > p-1$. Estimates (21)–(24) imply

$$\mathbb{E}\|Y\|^{p-1}\|Z\| \le \mathbb{E}\|Z\|^p\Big(\delta^{\frac{p-1}{p}} + \delta^{1/p}\big(1 + C_2C_0^{-q/p}\big) + C_3(\delta C_0)^{-q/p}\Big).$$

Now we choose $\delta = \delta(p)$ sufficiently small and then $C_0 = C_0(p, A, \mu, q, \lambda)$ sufficiently large to obtain

$$\mathbb{E}\|Y\|^{p-1}\|Z\| \le \frac{1}{4p3^p}\,\mathbb{E}\|Z\|^p. \tag{25}$$

From Lemma 10 we deduce

$$\mathbb{E}\|Y+Z\|^p \ge \mathbb{E}\|Y\|^p + \frac{1}{2\cdot3^p}\,\mathbb{E}\|Z\|^p.$$

Hence

$$\mathbb{E}\Big\|\sum_{i=0}^{n+1} v_i R_i\Big\|^p\mathbf{1}_{A_k} \ge \frac{1}{2\cdot3^p}\,\mathbb{E}\Big\|\sum_{i=0}^{k-1} v_i R_i\Big\|^p\mathbf{1}_{A_k} + \mathbb{E}\Big\|\sum_{i=k}^{n+1} v_i R_i\Big\|^p\mathbf{1}_{A_k}. \tag{26}$$

Lemma 7 and (18) yield

$$\mathbb{E}\Big\|\sum_{i=0}^{k-1} v_i R_i\Big\|^p\mathbf{1}_{A_k} \ge \frac{\mu^p}{8^p}\|v_0\|^p\,\mathbb{P}\big(R_{2,k}\le2^{1/q}\lambda^{k-1}\big) \ge \frac12\cdot\frac{\mu^p}{8^p}\|v_0\|^p.$$

It follows that

$$\frac{1}{2\cdot3^p}\,\mathbb{E}\Big\|\sum_{i=0}^{k-1} v_i R_i\Big\|^p\mathbf{1}_{A_k} \ge \varepsilon_0\|v_0\|^p + \varepsilon_0\,\mathbb{E}\Big\|\sum_{i=0}^{k-1} v_i R_i\Big\|^p\mathbf{1}_{A_k}. \tag{27}$$

By the induction assumption we obtain

$$\mathbb{E}\Big\|\sum_{i=k}^{n+1} v_i R_i\Big\|^p\mathbf{1}_{A_k} \ge \varepsilon_0\,\mathbb{E}\|v_k R_k\|^p\mathbf{1}_{A_k} + \sum_{i=k+1}^{n+1}\Big(\frac{\varepsilon_1}{k} - c_{i-k}\Big)\mathbb{E}\|v_i R_k\|^p\mathbf{1}_{A_k}. \tag{28}$$


Combining (26), (27) and (28) we get

$$\mathbb{E}\Big\|\sum_{i=0}^{n+1} v_i R_i\Big\|^p\mathbf{1}_{A_k} \ge \varepsilon_0\|v_0\|^p + \varepsilon_0\,\mathbb{E}\Big\|\sum_{i=0}^{k-1} v_i R_i\Big\|^p\mathbf{1}_{A_k} + \varepsilon_0\,\mathbb{E}\|v_k R_k\|^p\mathbf{1}_{A_k} + \sum_{i=k+1}^{n+1}\Big(\frac{\varepsilon_1}{k} - c_{i-k}\Big)\mathbb{E}\|v_i R_k\|^p\mathbf{1}_{A_k}$$
$$\ge \varepsilon_0\|v_0\|^p + \frac{\varepsilon_0}{2^{p-1}}\,\mathbb{E}\Big\|\sum_{i=0}^{k} v_i R_i\Big\|^p\mathbf{1}_{A_k} + \sum_{i=k+1}^{n+1}\Big(\frac{\varepsilon_1}{k} - c_{i-k}\Big)\mathbb{E}\|v_i R_k\|^p\mathbf{1}_{A_k}.$$

This inequality together with (17) and Lemma 8 yields

$$\mathbb{E}\Big\|\sum_{i=0}^{n+1} v_i R_i\Big\|^p \ge \varepsilon_0\|v_0\|^p + \frac{\varepsilon_0}{2^{p-1}}\,\mathbb{E}\Big\|\sum_{i=0}^{k} v_i R_i\Big\|^p + \sum_{i=k+1}^{n+1}\Big(\frac{\varepsilon_1}{k} - c_{i-k}\Big)\mathbb{E}\|v_i R_k\|^p$$
$$\ge \varepsilon_0\|v_0\|^p + \frac{\varepsilon_1}{k}\sum_{i=1}^{k}\|v_i\|^p + \sum_{i=k+1}^{n+1}\Big(\frac{\varepsilon_1}{k} - c_{i-k}\Big)\|v_i\|^p \ge \varepsilon_0\|v_0\|^p + \sum_{i=1}^{n+1}\Big(\frac{\varepsilon_1}{k} - c_i\Big)\|v_i\|^p.$$

We are ready to prove the lower $L_p$-estimate for $p > 1$.

Proof of the lower bound in Theorem 3. For sufficiently large $k$ we have for all $i$,

$$c_i \le \frac{\Phi\lambda^k}{1-\lambda} = \frac{C_0\lambda^{pk}}{1-\lambda} \le \frac{\varepsilon_1}{2k}.$$

Thus, Proposition 11 yields

$$\mathbb{E}\Big\|\sum_{i=0}^{n} v_i R_i\Big\|^p \ge \varepsilon_0\|v_0\|^p + \frac{\varepsilon_1}{2k}\sum_{i=1}^{n}\|v_i\|^p \ge \varepsilon\sum_{i=0}^{n}\|v_i\|^p,$$

where $\varepsilon := \min\big\{\varepsilon_0, \frac{\varepsilon_1}{2k}\big\}$.

Remark. Observe that $\mu \le \mathbb{E}|X_i - \mathbb{E}X_i| \le 2\mathbb{E}X_i \le 2(\mathbb{E}X_i^p)^{1/p} = 2$. This shows that

$$\varepsilon_0 = \frac{\mu^p}{8\cdot24^p}, \qquad \varepsilon_1 = \frac{\mu^{2p}}{2^{p-1}\cdot64^p}\,\varepsilon_0 \qquad\text{and}\qquad \min\Big\{\varepsilon_0, \frac{\varepsilon_1}{2k}\Big\} = \frac{\mu^{3p}}{8k\cdot2^{10p}\cdot3^p}.$$


The other constants used in the proof of Proposition 11 may be estimated as follows:

$$\varepsilon_2 = \frac{\mu^p\lambda^p}{8^pA^p\varepsilon_0\,2^{p/q}} \ge \Big(\frac{3\lambda}{2A}\Big)^p, \qquad C_1 = (1-\lambda)^{\frac{(1-p)q}{p}}\varepsilon_2^{-q/p} \le (1-\lambda)^{\frac{(1-p)q}{p}}\Big(\frac{2A}{3\lambda}\Big)^q,$$
$$C_2 \le \frac{p-1}{q+1-p}\,C_1 \qquad\text{and}\qquad C_3 = \frac{2^{q/p}}{2^{\frac{q+1-p}{p}}-1}\,C_1 \le \frac{2p}{(q+1-p)\ln2}\,C_1.$$

Hence we can for example take

$$\delta := 48^{-p^2/\min\{p-1,1\}} \qquad\text{and}\qquad C_0 := (1-\lambda)^{1-p}\Big(\frac{2A}{3\lambda}\Big)^p\Big(\frac{2p}{(q+1-p)\ln2}\Big)^{p/q}48^{2p^2/\min\{p-1,1\}};$$

then each of the terms $\delta^{(p-1)/p}$, $\delta^{1/p}$, $\delta^{1/p}C_2C_0^{-q/p}$ and $C_3(\delta C_0)^{-q/p}$ is not greater than $48^{-p} \le (16p3^p)^{-1}$ and (25) holds.

3 Lower bound - p ≤ 1

In this section we prove the lower bound in Theorem 2. We will also assume the normalization (15) and use notation similar to that for $p > 1$.

We begin with a result similar to Lemma 7.

Lemma 12. Let $X$ be a nonnegative random variable such that $\mathbb{E}X^p = 1$. Then for every $A > 1$ and $u, v$ in a normed space $(F, \|\cdot\|)$ we have

$$\mathbb{E}\|uX+v\|^p \ge \mathbb{E}\|uX+v\|^p\,\mathbf{1}_{\{X^p\le A\}} \ge \delta\max\{\|u\|^p, \|v\|^p\},$$

where $\delta := \mathbb{E}(X^p - 1)\,\mathbf{1}_{\{1\le X^p\le A\}}$.

Proof. Since $\mathbb{E}X^p = 1$ we have

$$\delta = \mathbb{E}(X^p-1)\mathbf{1}_{\{1\le X^p\le A\}} \le \mathbb{E}(X^p-1)\mathbf{1}_{\{X^p\ge1\}} = \mathbb{E}(1-X^p)\mathbf{1}_{\{X^p\le1\}} \le \mathbb{P}(X^p\le1) \le \mathbb{P}(X^p\le A). \tag{29}$$

The triangle inequality yields $\|uX+v\| \ge \big|\|u\|X - \|v\|\big|$. Thus, it suffices to prove

$$\mathbb{E}\big|\|u\|X - \|v\|\big|^p\,\mathbf{1}_{\{X^p\le A\}} \ge \delta\max\{\|u\|^p, \|v\|^p\}. \tag{30}$$

If $u = 0$ then this inequality is satisfied due to (29). In the case $u \ne 0$, divide both sides of (30) by $\|u\|^p$ to see that it is enough to show

$$\mathbb{E}|X-t|^p\,\mathbf{1}_{\{X^p\le A\}} \ge \delta\max\{t^p, 1\} \quad\text{for } t \ge 0.$$

To prove this inequality let us consider two cases. First assume that $t \in [0,1]$. Then we have

$$\mathbb{E}|X-t|^p\mathbf{1}_{\{X^p\le A\}} \ge \mathbb{E}|X-t|^p\mathbf{1}_{\{1\le X^p\le A\}} \ge \mathbb{E}(X^p-t^p)\mathbf{1}_{\{1\le X^p\le A\}} \ge \mathbb{E}(X^p-1)\mathbf{1}_{\{1\le X^p\le A\}} = \delta = \delta\max\{t^p,1\}.$$

In the case $t > 1$ it suffices to note that

$$\mathbb{E}|X-t|^p\mathbf{1}_{\{X^p\le A\}} \ge \mathbb{E}|X-t|^p\mathbf{1}_{\{X^p\le1\}} \ge \mathbb{E}(t^p-X^p)\mathbf{1}_{\{X^p\le1\}} \ge \mathbb{E}(t^p-t^pX^p)\mathbf{1}_{\{X^p\le1\}} = t^p\,\mathbb{E}(1-X^p)\mathbf{1}_{\{X^p\le1\}} \ge \delta t^p = \delta\max\{t^p,1\},$$

where the last inequality follows from (29).
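A concrete instance of Lemma 12 (illustrative only, not from the paper): for $p = 1$, $X$ uniform on the two-point set $\{1/2, 3/2\}$ (so that $\mathbb{E}X^p = 1$) and $A = 2$, one gets $\delta = 1/4$, and the reduced inequality $\mathbb{E}|X-t|^p\mathbf{1}_{\{X^p\le A\}} \ge \delta\max\{t^p,1\}$ can be checked on a grid of $t$.

```python
p, A = 1.0, 2.0
xs = [0.5, 1.5]   # equally likely values of X, so EX^p = 1

# delta = E (X^p - 1) 1_{1 <= X^p <= A}
delta = sum(x ** p - 1 for x in xs if 1 <= x ** p <= A) / len(xs)

def lhs(t):
    # E |X - t|^p 1_{X^p <= A}
    return sum(abs(x - t) ** p for x in xs if x ** p <= A) / len(xs)

for j in range(501):
    t = j / 50.0                              # t ranges over [0, 10]
    assert lhs(t) >= delta * max(t ** p, 1.0) - 1e-12
print(delta)   # 0.25
```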

As a consequence, in the same way as in Lemma 8, we derive the following estimate.

Lemma 13. Let the r.v.'s $X_1, X_2, \ldots$ satisfy (15) and (4). Then for any vectors $v_0, v_1, \ldots, v_n \in F$ we get

$$\mathbb{E}\Big\|\sum_{i=0}^{n} v_i R_i\Big\|^p \ge \delta^2\max_{1\le i\le n}\|v_i\|^p \ge \frac{\delta^2}{n}\sum_{i=1}^{n}\|v_i\|^p.$$

Lemma 14. Suppose that the random variables $X_1, X_2, \ldots$ satisfy assumptions (15) and (3). Then for all vectors $v_0, v_1, \ldots, v_n$ in $(F, \|\cdot\|)$ and all $t > 0$ we have

$$\mathbb{P}\left(\Big\|\sum_{i=0}^{n} v_i R_i\Big\|^p \ge \frac{t}{1-\lambda}\sum_{i=0}^{n}\lambda^i\|v_i\|^p\right) \le \frac{1}{\sqrt t}.$$

Proof. Note that

$$\mathbb{E}\Big\|\sum_{i=0}^{n} v_i R_i\Big\|^{p/2} \le \sum_{i=0}^{n}\|v_i\|^{p/2}\,\mathbb{E}R_i^{p/2} \le \sum_{i=0}^{n}\lambda^i\|v_i\|^{p/2}.$$

By the Cauchy–Schwarz inequality we get

$$\Big(\sum_{i=0}^{n}\lambda^i\|v_i\|^{p/2}\Big)^2 \le \sum_{i=0}^{n}\lambda^i\cdot\sum_{i=0}^{n}\lambda^i\|v_i\|^p \le \frac{1}{1-\lambda}\sum_{i=0}^{n}\lambda^i\|v_i\|^p.$$

Thus, using Chebyshev's inequality, we arrive at

$$\mathbb{P}\left(\Big\|\sum_{i=0}^{n} v_i R_i\Big\|^p \ge \frac{t}{1-\lambda}\sum_{i=0}^{n}\lambda^i\|v_i\|^p\right) \le \mathbb{P}\left(\Big\|\sum_{i=0}^{n} v_i R_i\Big\|^{p/2} \ge \sqrt t\sum_{i=0}^{n}\lambda^i\|v_i\|^{p/2}\right) \le \Big(\sqrt t\sum_{i=0}^{n}\lambda^i\|v_i\|^{p/2}\Big)^{-1}\mathbb{E}\Big\|\sum_{i=0}^{n} v_i R_i\Big\|^{p/2} \le \frac{1}{\sqrt t}.$$
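The Cauchy–Schwarz step in the proof above is the only place where the factor $1/(1-\lambda)$ enters. A small numerical check (a sketch with arbitrary $\lambda$ and weights $b_i$ standing in for $\|v_i\|^{p/2}$) confirms the chain $(\sum\lambda^i b_i)^2 \le (\sum\lambda^i)(\sum\lambda^i b_i^2) \le \frac{1}{1-\lambda}\sum\lambda^i b_i^2$.

```python
lam = 0.6                                  # any 0 < lambda < 1
bs = [3.0, 0.1, 2.5, 0.0, 1.7, 4.2]        # stand-ins for ||v_i||^(p/2)

s1 = sum(lam ** i * b for i, b in enumerate(bs))          # sum lam^i b_i
s2 = sum(lam ** i for i in range(len(bs)))                # sum lam^i
s3 = sum(lam ** i * b * b for i, b in enumerate(bs))      # sum lam^i b_i^2

assert s1 * s1 <= s2 * s3 <= s3 / (1 - lam) + 1e-12
print("ok")
```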


Our next lemma is in the spirit of Lemma 10, but it has a simpler proof.

Lemma 15. Let $Y, Z$ be random vectors with values in a normed space $(F, \|\cdot\|)$ such that

$$\mathbb{E}\|Z\|^p\,\mathbf{1}_{\{\|Y\|^p\ge\frac18\mathbb{E}\|Z\|^p\}} \le \frac18\,\mathbb{E}\|Z\|^p.$$

Then

$$\mathbb{E}\|Y+Z\|^p \ge \mathbb{E}\|Y\|^p + \frac12\,\mathbb{E}\|Z\|^p.$$

Proof. For any $u, v \in F$ we have $\|u+v\|^p \ge \big|\|u\|-\|v\|\big|^p \ge \|u\|^p - \|v\|^p$, therefore

$$\mathbb{E}\|Y+Z\|^p \ge \mathbb{E}\big(\|Y\|^p + \|Z\|^p - 2\|Z\|^p\big)\mathbf{1}_{\{\|Y\|^p\ge\frac18\mathbb{E}\|Z\|^p\}} + \mathbb{E}\big(\|Y\|^p + \|Z\|^p - 2\|Y\|^p\big)\mathbf{1}_{\{\|Y\|^p<\frac18\mathbb{E}\|Z\|^p\}}$$
$$\ge \mathbb{E}\|Y\|^p + \mathbb{E}\|Z\|^p - 2\,\mathbb{E}\|Z\|^p\mathbf{1}_{\{\|Y\|^p\ge\frac18\mathbb{E}\|Z\|^p\}} - 2\,\mathbb{E}\|Y\|^p\mathbf{1}_{\{\|Y\|^p<\frac18\mathbb{E}\|Z\|^p\}}$$
$$\ge \mathbb{E}\|Y\|^p + \mathbb{E}\|Z\|^p - 2\cdot\frac18\,\mathbb{E}\|Z\|^p - 2\cdot\frac18\,\mathbb{E}\|Z\|^p = \mathbb{E}\|Y\|^p + \frac12\,\mathbb{E}\|Z\|^p.$$

The proof of the lower bound for $p \le 1$ is similar to the proof for $p > 1$ and relies on a proposition similar to Proposition 11.

Proposition 16. Let $0 < p \le 1$ and let the r.v.'s $X_1, X_2, \ldots$ satisfy assumptions (15), (3) and (4). Then for any vectors $v_0, v_1, \ldots, v_n$ in a normed space $(F, \|\cdot\|)$ and any integer $k \ge 1$ we have

$$\mathbb{E}\Big\|\sum_{i=0}^{n} v_i R_i\Big\|^p \ge \varepsilon_0\|v_0\|^p + \sum_{i=1}^{n}\Big(\frac{\varepsilon_1}{k} - c_i\Big)\|v_i\|^p,$$

where $\varepsilon_0 = \delta/8$, $\varepsilon_1 = \delta^3/8$ and

$$c_i = 0 \text{ for } 1\le i\le k-1, \qquad c_i = \Phi\sum_{j=k}^{i}\lambda^j \text{ for } i\ge k \qquad\text{and}\qquad \Phi = \frac{2^8A}{1-\lambda}\,\lambda^{k-2}.$$

Proof. For $n \le k$ the assertion follows by Lemmas 12 and 13, since $\varepsilon_0 \le \delta/2$ and $\varepsilon_1/k \le \varepsilon_1/n \le \delta^2/(2n)$. For $n \ge k$ we proceed by induction on $n$.

Case 1. $\varepsilon_0\|v_0\|^p \le \Phi\sum_{i=k}^{n+1}\lambda^i\|v_i\|^p$.

In this case the induction step is the same as in the proof of Proposition 11.

Case 2. $\varepsilon_0\|v_0\|^p > \Phi\sum_{i=k}^{n+1}\lambda^i\|v_i\|^p$. Let us define the set

$$A_k := \big\{X_1^p \le A,\ R_{2,k}^p \le 4\lambda^{2k-2}\big\}.$$


By the induction hypothesis we have

$$\mathbb{E}\Big\|\sum_{i=0}^{n+1} v_i R_i\Big\|^p\mathbf{1}_{\Omega\setminus A_k} \ge \varepsilon_0\,\mathbb{E}\Big\|\sum_{i=0}^{k} v_i R_i\Big\|^p\mathbf{1}_{\Omega\setminus A_k} + \sum_{i=k+1}^{n+1}\Big(\frac{\varepsilon_1}{k} - c_{i-k}\Big)\mathbb{E}\|v_i R_k\|^p\mathbf{1}_{\Omega\setminus A_k}. \tag{31}$$

By Chebyshev's inequality and (3) we get

$$\mathbb{P}\big(R_{2,k}^p > 4\lambda^{2k-2}\big) \le \frac{\mathbb{E}R_{2,k}^{p/2}}{2\lambda^{k-1}} \le \frac12, \tag{32}$$

in particular $\mathbb{P}(A_k) > 0$. Let $Y, Y_0, Z$ be defined as in the proof of Proposition 11. As in (19), Lemma 12 yields $\mathbb{E}\|Z\|^p \ge \delta\|v_0\|^p$. We have $\|Y\|^p \le 4A\lambda^{2k-2}\|Y_0\|^p$, the variables $Y_0$ and $Z$ are independent, and $Y_0$ has the same distribution as $\sum_{i=k}^{n+1} v_i R_{k+1,i}$. Thus,

$$\mathbb{E}\|Z\|^p\mathbf{1}_{\{\|Y\|^p\ge\frac18\mathbb{E}\|Z\|^p\}} \le \mathbb{E}\|Z\|^p\mathbf{1}_{\{4A\lambda^{2k-2}\|Y_0\|^p\ge\frac\delta8\|v_0\|^p\}} = \mathbb{E}\|Z\|^p\,\mathbb{P}\Big(\|Y_0\|^p \ge \frac{\varepsilon_0\|v_0\|^p}{4A\lambda^{2k-2}}\Big)$$
$$\le \mathbb{E}\|Z\|^p\,\mathbb{P}\left(\Big\|\sum_{i=k}^{n+1} v_i R_{k+1,i}\Big\|^p \ge \frac{2^6}{1-\lambda}\sum_{i=k}^{n+1}\lambda^{i-k}\|v_i\|^p\right) \le \frac18\,\mathbb{E}\|Z\|^p,$$

where the second inequality follows by the assumption of Case 2 and the definition of $\Phi$, and the last one by Lemma 14. Hence, Lemma 15 yields

$$\mathbb{E}\|Y+Z\|^p \ge \mathbb{E}\|Y\|^p + \frac12\,\mathbb{E}\|Z\|^p.$$

Thus

$$\mathbb{E}\Big\|\sum_{i=0}^{n+1} v_i R_i\Big\|^p\mathbf{1}_{A_k} \ge \frac12\,\mathbb{E}\Big\|\sum_{i=0}^{k-1} v_i R_i\Big\|^p\mathbf{1}_{A_k} + \mathbb{E}\Big\|\sum_{i=k}^{n+1} v_i R_i\Big\|^p\mathbf{1}_{A_k}. \tag{33}$$

Using Lemma 12 and (32) we obtain

$$\mathbb{E}\Big\|\sum_{i=0}^{k-1} v_i R_i\Big\|^p\mathbf{1}_{A_k} \ge \delta\|v_0\|^p\,\mathbb{P}\big(R_{2,k}^p \le 4\lambda^{2k-2}\big) \ge \frac\delta2\,\|v_0\|^p.$$

Since $\varepsilon_0 \le \frac14$ and $\varepsilon_0 \le \delta/8$ it follows that

$$\frac12\,\mathbb{E}\Big\|\sum_{i=0}^{k-1} v_i R_i\Big\|^p\mathbf{1}_{A_k} \ge \varepsilon_0\|v_0\|^p + \varepsilon_0\,\mathbb{E}\Big\|\sum_{i=0}^{k-1} v_i R_i\Big\|^p\mathbf{1}_{A_k}. \tag{34}$$

By the induction assumption we obtain

$$\mathbb{E}\Big\|\sum_{i=k}^{n+1} v_i R_i\Big\|^p\mathbf{1}_{A_k} \ge \varepsilon_0\,\mathbb{E}\|v_k R_k\|^p\mathbf{1}_{A_k} + \sum_{i=k+1}^{n+1}\Big(\frac{\varepsilon_1}{k} - c_{i-k}\Big)\mathbb{E}\|v_i R_k\|^p\mathbf{1}_{A_k}. \tag{35}$$

Combining (33), (34) and (35) we arrive at

$$\mathbb{E}\Big\|\sum_{i=0}^{n+1} v_i R_i\Big\|^p\mathbf{1}_{A_k} \ge \varepsilon_0\|v_0\|^p + \varepsilon_0\,\mathbb{E}\Big\|\sum_{i=0}^{k-1} v_i R_i\Big\|^p\mathbf{1}_{A_k} + \varepsilon_0\,\mathbb{E}\|v_k R_k\|^p\mathbf{1}_{A_k} + \sum_{i=k+1}^{n+1}\Big(\frac{\varepsilon_1}{k} - c_{i-k}\Big)\mathbb{E}\|v_i R_k\|^p\mathbf{1}_{A_k}$$
$$\ge \varepsilon_0\|v_0\|^p + \varepsilon_0\,\mathbb{E}\Big\|\sum_{i=0}^{k} v_i R_i\Big\|^p\mathbf{1}_{A_k} + \sum_{i=k+1}^{n+1}\Big(\frac{\varepsilon_1}{k} - c_{i-k}\Big)\mathbb{E}\|v_i R_k\|^p\mathbf{1}_{A_k}.$$

Combining this inequality with (31) yields

$$\mathbb{E}\Big\|\sum_{i=0}^{n+1} v_i R_i\Big\|^p \ge \varepsilon_0\|v_0\|^p + \varepsilon_0\,\mathbb{E}\Big\|\sum_{i=0}^{k} v_i R_i\Big\|^p + \sum_{i=k+1}^{n+1}\Big(\frac{\varepsilon_1}{k} - c_{i-k}\Big)\mathbb{E}\|v_i R_k\|^p$$
$$\ge \varepsilon_0\|v_0\|^p + \frac{\varepsilon_1}{k}\sum_{i=1}^{k}\|v_i\|^p + \sum_{i=k+1}^{n+1}\Big(\frac{\varepsilon_1}{k} - c_{i-k}\Big)\|v_i\|^p \ge \varepsilon_0\|v_0\|^p + \sum_{i=1}^{n+1}\Big(\frac{\varepsilon_1}{k} - c_i\Big)\|v_i\|^p,$$

where in the second inequality we used Lemma 13.

We are now ready to establish the lower $L_p$-bound for $p \le 1$.

Proof of the lower bound in Theorem 2. Choose $k$ such that

$$k\lambda^{2k-2} \le \frac{\delta^3(1-\lambda)^2}{2^{12}A}.$$

Then

$$c_i \le \frac{\Phi\lambda^k}{1-\lambda} = \frac{2^8A\lambda^{2k-2}}{(1-\lambda)^2} \le \frac{\varepsilon_1}{2k}.$$

Therefore, Proposition 16 implies

$$\mathbb{E}\Big\|\sum_{i=0}^{n} v_i R_i\Big\|^p \ge \frac\delta8\,\|v_0\|^p + \frac{\delta^3}{16k}\sum_{i=1}^{n}\|v_i\|^p \ge \frac{\delta^3}{16k}\sum_{i=0}^{n}\|v_i\|^p.$$


4 Upper bounds

The upper bound in Theorem 2 follows immediately from the inequality $(a+b)^p \le a^p + b^p$ for $a, b \ge 0$ and $p \in (0,1]$. To get the upper bound in Theorem 3 we prove the following slightly more general result.
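(The subadditivity inequality just invoked is easy to confirm numerically; the following sketch, which is illustrative only and not part of the paper, checks $(a+b)^p \le a^p + b^p$ on a grid of nonnegative $a, b$ for several $p \in (0,1]$.)

```python
def subadditive(p, a, b):
    # (a + b)^p <= a^p + b^p for a, b >= 0 and 0 < p <= 1
    return (a + b) ** p <= a ** p + b ** p + 1e-12

for p in (0.2, 0.5, 0.9, 1.0):
    for ia in range(101):
        for ib in range(101):
            assert subadditive(p, ia / 10.0, ib / 10.0)
print("ok")
```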

Proposition 17. Let $p > 0$ and let $X_1, X_2, \ldots$ be independent random variables such that $\mathbb{E}|X_i|^p < \infty$ for all $i$ and

$$\forall_{1\le k<\lceil p\rceil}\,\exists_{\lambda_k<1}\,\forall_i \quad (\mathbb{E}|X_i|^{p-k})^{1/(p-k)} \le \lambda_k\,(\mathbb{E}|X_i|^{p-k+1})^{1/(p-k+1)}. \tag{36}$$

Then for any vectors $v_0, v_1, \ldots, v_n$ in a normed space $(F, \|\cdot\|)$ we have

$$\mathbb{E}\Big\|\sum_{i=0}^{n} v_i R_i\Big\|^p \le C(p)\sum_{i=0}^{n}\|v_i\|^p\,\mathbb{E}|R_i|^p, \tag{37}$$

where $C(p) = 1$ for $p \le 1$ and, for $p > 1$,

$$C(p) = 2^p\left(1 + C(p-1)\,\frac{\lambda_1^{p-1}}{1-\lambda_1^{p-1}}\right) \le \frac{2^p\,C(p-1)}{1-\lambda_1^{p-1}}.$$

Proof. We have $\|\sum_{i=0}^{n} v_i R_i\| \le \sum_{i=0}^{n}\|v_i\||R_i|$ and $|R_i| = \prod_{j=1}^{i}|X_j|$, so it is enough to consider the case when $F = \mathbb{R}$, $v_k \ge 0$ and the variables $X_j$ are nonnegative. Since it is only a matter of normalization we may also assume that $\mathbb{E}X_i^p = 1$ for all $i$.

We proceed by induction on $m := \lceil p\rceil$. If $m = 1$, i.e. $0 < p \le 1$, then the assertion easily follows, since $(x+y)^p \le x^p + y^p$ for $x, y \ge 0$. Suppose that $m \ge 1$ and (37) holds in the case $p \le m$, and take $p$ such that $m < p \le m+1$. Observe that

$$(x+y)^p \le x^p + 2^p\big(yx^{p-1} + y^p\big) \quad\text{for } x, y \ge 0. \tag{38}$$

Indeed, either $x \le y$ and then $(x+y)^p \le 2^p y^p$, or $0 \le y < x$ and then, by the convexity of $x \mapsto x^p$,

$$\frac{(x+y)^p - x^p}{y} \le \frac{(2x)^p - x^p}{x} = (2^p - 1)x^{p-1}.$$

We have by (38)

$$\mathbb{E}\Big(\sum_{i=0}^{n} v_i R_i\Big)^p \le \mathbb{E}\Big(\sum_{i=1}^{n} v_i R_i\Big)^p + 2^p\left(v_0\,\mathbb{E}\Big(\sum_{i=1}^{n} v_i R_i\Big)^{p-1} + v_0^p\right).$$

Iterating this inequality we get

$$\mathbb{E}\Big(\sum_{i=0}^{n} v_i R_i\Big)^p \le v_n^p\,\mathbb{E}R_n^p + 2^p\left(\sum_{k=0}^{n-1} v_k\,\mathbb{E}R_k\Big(\sum_{i=k+1}^{n} v_i R_i\Big)^{p-1} + \sum_{i=0}^{n-1} v_i^p\,\mathbb{E}R_i^p\right).$$
