Two-sided bounds for $L^p$-norms of combinations of products of independent random variables

Ewa Damek*, Rafał Latała†, Piotr Nayar and Tomasz Tkocz

April 2, 2014

Abstract

We show that for every positive $p$, the $L^p$-norm of linear combinations (with scalar or vector coefficients) of products of i.i.d. nonnegative random variables with $L^p$-norm one is comparable to the $l^p$-norm of the coefficients, and the constants are explicit. As a result, the same holds for linear combinations of Riesz products. We also establish upper and lower bounds for the $L^p$-moments of partial sums of perpetuities.

Key words and phrases: estimation of moments, product of independent random variables, Riesz's product, stochastic difference equation, perpetuity.

AMS 2010 Subject classification: Primary 60E15, Secondary 60H25.
1 Introduction and Main Results

Let $X, X_1, X_2, \ldots$ be i.i.d. nondegenerate nonnegative r.v.'s with finite mean. Define
$$R_0:=1\quad\text{and}\quad R_i:=\prod_{j=1}^iX_j\ \text{ for }i=1,2,\ldots.\qquad(1)$$
Then obviously for any vectors $v_0,v_1,\ldots,v_n$ in a normed space $(F,\|\cdot\|)$,
$$E\Big\|\sum_{i=0}^nv_iR_i\Big\|\le\sum_{i=0}^n\|v_i\|\,ER_i.$$
In [17] it was shown that the opposite inequality holds, i.e.
$$E\Big\|\sum_{i=0}^nv_iR_i\Big\|\ge c_X\sum_{i=0}^n\|v_i\|\,ER_i,$$
where $c_X$ is a constant which depends only on the distribution of $X$.

In this paper we present similar estimates for $L^p$-norms. Our main result is the following.
* Research supported by the NCN grant DEC-2012/05/B/ST1/00692 and by the Warsaw Center of Mathematics and Computer Science.
† Research supported by the NCN grant DEC-2012/05/B/ST1/00412.
Theorem 1. Let $p>0$ and let $X,X_1,X_2,\ldots$ be i.i.d. nondegenerate nonnegative r.v.'s such that $EX^p<\infty$, and let $R_i$ be defined by (1). Then there exist constants $0<c_{p,X}\le C_{p,X}<\infty$ which depend only on $p$ and the distribution of $X$ such that for any vectors $v_0,v_1,\ldots,v_n$ in a normed space $(F,\|\cdot\|)$,
$$c_{p,X}\sum_{i=0}^n\|v_i\|^pER_i^p\le E\Big\|\sum_{i=0}^nv_iR_i\Big\|^p\le C_{p,X}\sum_{i=0}^n\|v_i\|^pER_i^p.$$
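As a quick numerical sanity check of Theorem 1 (our own illustration, not part of the paper: the exponential factors, the exponent $p=1.5$ and the sample sizes are arbitrary choices), one can estimate both sides of the two-sided bound by Monte Carlo for scalar coefficients:

```python
import math

import numpy as np

rng = np.random.default_rng(0)
p, n, trials = 1.5, 8, 200_000

# i.i.d. nonnegative factors X_j: exponentials rescaled so that E X^p = 1
# (for a standard exponential, E X^p = Gamma(1 + p)); then E R_i^p = 1.
X = rng.exponential(size=(trials, n)) / math.gamma(1 + p) ** (1 / p)

# R_0 = 1, R_i = X_1 ... X_i as rows of partial products.
R = np.concatenate([np.ones((trials, 1)), np.cumprod(X, axis=1)], axis=1)

v = rng.standard_normal(n + 1)        # scalar coefficients v_0, ..., v_n
lhs = np.mean(np.abs(R @ v) ** p)     # estimates E |sum_i v_i R_i|^p
rhs = np.sum(np.abs(v) ** p)          # sum_i |v_i|^p E R_i^p  (each E R_i^p = 1)
print(f"ratio E|sum v_i R_i|^p / sum |v_i|^p = {lhs / rhs:.3f}")
```

The theorem asserts that this ratio stays between $c_{p,X}$ and $C_{p,X}$ no matter how $n$ and the coefficients are chosen.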
In fact we prove a more general result that does not require the identical distribution assumption. Namely, suppose that
$$X_1,X_2,\ldots\ \text{are independent, nonnegative r.v.'s such that}\ EX_i^p<\infty.\qquad(2)$$
Further assumptions depend on whether $p\le1$. For $p\in(0,1]$ we assume that
$$\exists_{\lambda<1}\ \forall_i\quad EX_i^{p/2}\le\lambda(EX_i^p)^{1/2}\qquad(3)$$
and
$$\exists_{0<\delta<1,\,A>1}\ \forall_i\quad E(X_i^p-EX_i^p)1_{\{EX_i^p\le X_i^p\le A\,EX_i^p\}}\ge\delta EX_i^p.\qquad(4)$$

Theorem 2. Let $0<p\le1$ and let $X_1,X_2,\ldots$ satisfy assumptions (2), (3) and (4). Then for any vectors $v_0,v_1,\ldots,v_n$ in a normed space $(F,\|\cdot\|)$ we have
$$c(p,\lambda,\delta,A)\sum_{i=0}^n\|v_i\|^pER_i^p\le E\Big\|\sum_{i=0}^nv_iR_i\Big\|^p\le\sum_{i=0}^n\|v_i\|^pER_i^p,$$
where $c(p,\lambda,\delta,A)$ is a constant which depends only on $p$, $\lambda$, $\delta$ and $A$.
For $p>1$, to obtain the lower bound we assume that
$$\exists_{\mu>0,\,A<\infty}\ \forall_i\quad E|X_i-EX_i|\ge\mu(EX_i^p)^{1/p}\ \ \text{and}\ \ E|X_i-EX_i|1_{\{X_i>A(EX_i^p)^{1/p}\}}\le\frac14\mu(EX_i^p)^{1/p}\qquad(5)$$
and
$$\exists_{q>\max\{p-1,1\}}\ \exists_{\lambda<1}\ \forall_i\quad(EX_i^q)^{1/q}\le\lambda(EX_i^p)^{1/p}.\qquad(6)$$
For the upper bound we need the condition
$$\forall_{k=1,2,\ldots,\lceil p\rceil-1}\ \exists_{\lambda_k<1}\ \forall_i\quad(EX_i^{p-k})^{1/(p-k)}\le\lambda_k(EX_i^{p-k+1})^{1/(p-k+1)}.\qquad(7)$$

Theorem 3. Let $p>1$ and let $X_1,X_2,\ldots$ satisfy assumptions (2), (5), (6) and (7). Then for any vectors $v_0,v_1,\ldots,v_n$ in a normed space $(F,\|\cdot\|)$ we have
$$c(p,\mu,A,q,\lambda)\sum_{i=0}^n\|v_i\|^pER_i^p\le E\Big\|\sum_{i=0}^nv_iR_i\Big\|^p\le C(p,\lambda_1,\ldots,\lambda_{\lceil p\rceil-1})\sum_{i=0}^n\|v_i\|^pER_i^p,$$
where $c(p,\mu,A,q,\lambda)$ is a positive constant which depends only on $p$, $\mu$, $A$, $q$ and $\lambda$, and $C(p,\lambda_1,\ldots,\lambda_{\lceil p\rceil-1})$ is a constant which depends only on $p,\lambda_1,\ldots,\lambda_{\lceil p\rceil-1}$.
Remark. The proofs presented below show that Theorem 2 holds with $c(p,\lambda,\delta,A)=\frac{\delta^3}{16k}$, where $k$ is an integer such that $k\lambda^{2k-2}\le\frac{\delta^3(1-\lambda)^2}{2^{12}A}$. In Theorem 3 we can take
$$C(p,\lambda_1,\ldots,\lambda_{\lceil p\rceil-1})=2^{p(p+1)/2}\prod_{1\le j\le\lceil p\rceil-1}\frac1{1-\lambda_j^{p-j}}\qquad\text{and}\qquad c(p,\mu,A,q,\lambda)=\frac{\mu^{3p}}{8k\cdot2^{10p}\cdot3^p},$$
where $k$ is an integer such that $k\lambda^{pk}\le\frac{(1-\lambda)\mu^{3p}}{8C_0\cdot2^{10p}\cdot3^p}$ and
$$C_0=(1-\lambda)^{1-p}\Big(\frac{2A}{3\lambda}\Big)^p\Big(\frac{2p}{(q+1-p)\ln2}\Big)^{p/q}48^{\frac{2p^2}{\min\{p-1,1\}}}.$$
By conditioning, Theorem 1 yields a similar result for products of symmetric r.v.'s.

Corollary 4. Let $p>0$ and let $X,X_1,X_2,\ldots$ be an i.i.d. sequence of symmetric r.v.'s such that $E|X|^p<\infty$ and $P(|X|=t)<1$ for all $t$. Then there exist constants $0<c_{p,X}\le C_{p,X}<\infty$ which depend only on $p$ and the distribution of $X$ such that for any vectors $v_0,v_1,\ldots,v_n$ in a normed space $(F,\|\cdot\|)$,
$$c_{p,X}\sum_{i=0}^n\|v_i\|^pE|R_i|^p\le E\Big\|\sum_{i=0}^nv_iR_i\Big\|^p\le C_{p,X}\sum_{i=0}^n\|v_i\|^pE|R_i|^p.$$
Proof. Let $(\varepsilon_i)$ be a sequence of independent symmetric $\pm1$ r.v.'s, independent of $(X_i)$. Then
$$E\Big\|\sum_{i=0}^nv_iR_i\Big\|^p=E\Big\|\sum_{i=0}^nv_i\prod_{k=1}^i\varepsilon_k\prod_{k=1}^i|X_k|\Big\|^p$$
and it is enough to use Theorem 1 for the variables $(|X_i|)$.
Another consequence of Theorem 1 is an estimate for $L^p$-norms of linear combinations of Riesz products. Let $\mathbb{T}=\mathbb{R}/2\pi\mathbb{Z}$ be the one-dimensional torus and let $m$ be the normalized Haar measure on $\mathbb{T}$. Riesz products are defined on $\mathbb{T}$ by the formula
$$\bar R_i(t)=\prod_{j=1}^i(1+\cos(n_jt)),\qquad i=1,2,\ldots,\qquad(8)$$
where $(n_k)_{k\ge1}$ is a lacunary increasing sequence of positive integers. It is well known that if the frequencies $n_k$ grow sufficiently fast, then $\|\sum_{i=0}^na_i\bar R_i\|_{L^p(\mathbb{T})}\sim(E|\sum_{i=0}^na_iR_i|^p)^{1/p}$ for $p\ge1$, where the $R_i$ are products of independent random variables distributed as $\bar R_1$. Together with Theorem 1 this gives an estimate for $\|\sum_{i=0}^na_i\bar R_i\|_{L^p(\mathbb{T})}$. Here is a more quantitative result.
Corollary 5. Suppose that $(n_k)_{k\ge1}$ is an increasing sequence of positive integers such that $n_{k+1}/n_k\ge3$ and $\sum_{k=1}^\infty\frac{n_k}{n_{k+1}}<\infty$. Then for any coefficients $a_0,a_1,\ldots,a_n\in\mathbb{R}$ and $p\ge1$,
$$c_p\sum_{i=0}^n|a_i|^p\int_{\mathbb{T}}|\bar R_i(t)|^p\,dm(t)\le\int_{\mathbb{T}}\Big|\sum_{i=0}^na_i\bar R_i(t)\Big|^p\,dm(t)\le C_p\sum_{i=0}^n|a_i|^p\int_{\mathbb{T}}|\bar R_i(t)|^p\,dm(t),\qquad(9)$$
where $0<c_p\le C_p<\infty$ are constants depending only on $p$ and the sequence $(n_k)$.
Proof. Let $X_1,X_2,\ldots$ be independent random variables distributed as $1+\cos(Y)$, where $Y$ is uniformly distributed on $[0,2\pi]$, and let $R_i$ be as in (1). By a result of Y. Meyer [18], $\|\sum_{i=0}^na_i\bar R_i\|_{L^p}\sim(E|\sum_{i=0}^na_iR_i|^p)^{1/p}$ (in particular also $\|\bar R_i\|_{L^p}\sim(ER_i^p)^{1/p}$), and the estimate follows by Theorem 1.
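A small numerical check of this setup (our own illustration; the frequencies $n_k=3^k$ are one admissible lacunary choice) confirms that each Riesz product has mean one with respect to $m$, matching $ER_i=1$ in the probabilistic model:

```python
import numpy as np

# Uniform grid on T = R/2πZ; the discrete mean over the grid integrates
# trigonometric polynomials of low degree exactly.
t = np.linspace(0.0, 2.0 * np.pi, 2 ** 16, endpoint=False)
freqs = [3, 9, 27, 81]                      # lacunary: n_{k+1}/n_k = 3

Rbar = np.cumprod([1.0 + np.cos(nk * t) for nk in freqs], axis=0)
means = Rbar.mean(axis=1)                   # approximates the integral of R̄_i dm
assert np.allclose(means, 1.0, atol=1e-8)   # every cross term has nonzero frequency
print(means.round(6))
```

With ratio at least 3, every product of distinct frequencies expands into cosines of nonzero frequency, so each $\bar R_i$ integrates to exactly 1.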
Theorem 1 also has an immediate application to the stationary solution $S$ of the random difference equation
$$S=XS+B,\qquad(10)$$
where the equality is meant in law, and $(X,B)$ is a random variable with values in $[0,\infty)\times\mathbb{R}^d$, independent of $S$, such that for some $p>0$,
$$EX^p=1,\quad E\|B\|^p<\infty\quad\text{and}\quad P(X=1)<1.\qquad(GK1)$$
Over the last 40 years equation (10) and its various modifications have attracted a lot of attention [1, 2, 3, 5, 8, 11, 12, 13, 14, 15, 16, 19, 20]. It has a rather wide spectrum of applications, including random walks in random environment, branching processes, fractals, finance, insurance, telecommunications, and various physical and biological models. In particular, the tail behaviour of $S$ is of interest.
It is well known that, in law,
$$S=\sum_{i=1}^\infty R_{i-1}B_i,$$
where $R_{i-1}=X_1\cdots X_{i-1}$, $R_0=1$ and $(X_i,B_i)$ is an i.i.d. sequence of r.v.'s with the same distribution as $(X,B)$. Under the additional assumption that
$$\log X\ \text{conditioned on}\ \{X\ne0\}\ \text{is non-lattice and}\ EX^p\log^+X<\infty,\qquad(GK2)$$
$S$ has heavy-tail behaviour, i.e. the limit
$$\lim_{t\to\infty}t^pP(\|S\|>t)=c_\infty(X,B)$$
exists, and $c_\infty(X,B)$ is strictly positive provided that $P(Xv+B=v)<1$ for every $v\in\mathbb{R}^d$. If $P(Xv+B=v)=1$, then $S_n=v-R_nv\to v=S$. Assumptions (GK1), (GK2) together with $P(Xv+B=v)<1$ will later on be referred to as the Goldie-Kesten conditions. Let
$$S_n=\sum_{i=1}^nR_{i-1}B_i.$$
It turns out that the sequence $E\|S_n\|^p$ is closely related to $c_\infty(X,B)$. Recently it has been proved in [6] that under the Goldie-Kesten conditions plus a slightly stronger moment assumption, $E(X^{p+\varepsilon}+\|B\|^{p+\varepsilon})<\infty$ for some $\varepsilon>0$, we have
$$\lim_{n\to\infty}\frac1{np\rho}E\|S_n\|^p=c_\infty(X,B)>0,\qquad\text{where }\rho:=EX^p\log X.$$
Now suppose that $X$ and $B$ are independent. Then Theorem 1 implies that for every $n$,
$$c_{p,X}E\|B\|^p\le\frac1nE\|S_n\|^p\le C_{p,X}E\|B\|^p,\qquad(11)$$
which gives uniform bounds on the Goldie constant $c_\infty(X,B)$ that depend only on the law of $X$ and on $E\|B\|^p$, and are independent of the dimension. Moreover, in some particular cases, when the constants $\lambda,\delta,\mu,q,\lambda_k$ in (3)-(7) can be estimated more carefully, (11) may give some information about the size of the Goldie constant; this is of some value, especially when none of the existing formulae for it is satisfactory enough (see [7, 10, 6, 4]).
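The uniform bound (11) is also easy to probe by simulation. In the sketch below (our own illustration; the lognormal law for $X$, normalized so that $EX^p=1$, and standard Gaussian $B$ are arbitrary choices), for $p=2$ and centered $B$ one even has $ES_n^2/n=EB^2$ exactly, so the printed ratios should hover around 1:

```python
import numpy as np

rng = np.random.default_rng(1)
p, s, trials = 2.0, 0.3, 100_000

for n in (3, 5, 10):
    # X_i lognormal with E X^p = 1: for LogNormal(mu, s^2) one has
    # E X^p = exp(p*mu + p^2 s^2 / 2), so take mu = -p s^2 / 2.
    X = rng.lognormal(mean=-p * s * s / 2, sigma=s, size=(trials, n))
    B = rng.standard_normal((trials, n))
    # R_{i-1} = X_1 ... X_{i-1}, with R_0 = 1.
    R = np.cumprod(np.concatenate([np.ones((trials, 1)), X[:, :-1]], axis=1), axis=1)
    Sn = np.sum(R * B, axis=1)            # partial sum of the perpetuity
    ratio = np.mean(np.abs(Sn) ** p) / n  # (1/n) E|S_n|^p, cf. (11)
    print(n, round(float(ratio), 3))
```

For other exponents $p$ the ratio need not equal $E\|B\|^p$, but by (11) it stays in a fixed interval $[c_{p,X}E\|B\|^p,\,C_{p,X}E\|B\|^p]$ uniformly in $n$.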
We can go even further. With a slight modification of the proof we can drop the assumption that $X$ and $B$ are independent and obtain the following theorem.

Theorem 6. Suppose that $F$ is a separable Banach space. Let $p>0$ and let $(X,B),(X_1,B_1),\ldots$ be an i.i.d. sequence with values in $[0,\infty)\times F$ such that $X$ is nondegenerate and $E\|B\|^p,EX^p<\infty$. Assume additionally that
$$P(Xv+B=v)<1\quad\text{for every }v\in F.\qquad(12)$$
Then there are constants $0<c_p(X,B)\le C_p(X,B)<\infty$ which depend on $p$ and the distribution of $(X,B)$ such that for every $n$,
$$c_p(X,B)E\|B\|^p\sum_{i=1}^nER_{i-1}^p\le E\Big\|\sum_{i=1}^nR_{i-1}B_i\Big\|^p\le C_p(X,B)E\|B\|^p\sum_{i=1}^nER_{i-1}^p.\qquad(13)$$
Theorem 6, specialized to our situation with $EX^p=1$, says that
$$c_p(X,B)E\|B\|^p\le\frac1nE\|S_n\|^p\le C_p(X,B)E\|B\|^p.\qquad(14)$$
This gives an estimate for the Goldie constant, but now with $c_p(X,B),C_p(X,B)$ depending on the law of $(X,B)$. Again, in particular cases, a careful examination of the constants involved in the proof may give a more satisfactory answer. Also, in view of Theorem 6, it would be worth relaxing the assumptions of [6].

The paper is organized as follows. In Sections 2 and 3 we derive the lower bounds in Theorems 3 and 2, respectively. Then in Section 4 we establish the upper bounds in both theorems. We conclude in Section 5 with a discussion of the proof of Theorem 6.
2 Lower bound - p > 1

In this section we show the lower bound in Theorem 3. Since it is only a matter of normalization, we will assume that
$$X_1,X_2,\ldots\ \text{are independent, nonnegative r.v.'s such that}\ EX_i^p=1.\qquad(15)$$
In particular this implies that $ER_i^p=1$ for all $i$. We also set, for $k=1,2,\ldots$,
$$R_{k,k-1}\equiv1\quad\text{and}\quad R_{k,i}:=\prod_{j=k}^iX_j\ \text{ for }i\ge k.$$
Observe that $R_i=R_kR_{k+1,i}$ for $i\ge k\ge0$.
We begin with several lemmas.

Lemma 7. Suppose that a nonnegative random variable $X$ satisfies $E|X-EX|\ge\mu$ and $E|X-EX|1_{\{X>A\}}\le\frac14\mu$. Then for all $p\ge1$ and $u,v\in(F,\|\cdot\|)$ we have
$$E\|uX+v\|^p\ge E\|uX+v\|^p1_{\{X\le A\}}\ge\frac{\mu^p}{8^p}\min\Big\{1,\frac1{(EX)^p}\Big\}\max\{\|u\|^p,\|v\|^p\}.$$
Proof. Let $Y$ have the same distribution as $X$ conditioned on the set $\{X\le A\}$, and define $t:=EY\le EX$. Clearly, $E(X-EX)_+=E(X-EX)_-\ge\frac12\mu$. Therefore,
$$E|X-t|1_{\{X\le A\}}\ge E(X-t)_+1_{\{X\le A\}}\ge E(X-EX)_+1_{\{X\le A\}}=E(X-EX)_+-E(X-EX)_+1_{\{X>A\}}\ge\frac12\mu-E|X-EX|1_{\{X>A\}}\ge\frac12\mu-\frac14\mu=\frac14\mu.$$
We obtain
$$E\|uY+v\|=\frac1tE\|v(t-Y)+(tu+v)Y\|\ge\frac1t\|v\|E|Y-t|-\|tu+v\|.$$
Since $E\|uY+v\|\ge\|uEY+v\|=\|tu+v\|$, we have
$$E\|uY+v\|\ge\frac1{2t}\|v\|E|Y-t|=\frac{\|v\|}{2tP(X\le A)}E|X-t|1_{\{X\le A\}}\ge\frac\mu{8t}\cdot\frac{\|v\|}{P(X\le A)}.$$
We arrive at
$$E\|uX+v\|^p1_{\{X\le A\}}\ge\big(E\|uX+v\|1_{\{X\le A\}}\big)^p=\big(E\|uY+v\|P(X\le A)\big)^p\ge\frac{\mu^p}{8^pt^p}\|v\|^p\ge\frac{\mu^p}{8^p(EX)^p}\|v\|^p.$$
We also have
$$E\|uY+v\|=E\|u(Y-t)+tu+v\|\ge\|u\|E|Y-t|-\|tu+v\|.$$
Therefore
$$E\|uY+v\|\ge\frac{\|u\|}2E|Y-t|\ge\frac\mu8\cdot\frac{\|u\|}{P(X\le A)},$$
and as before we get $E\|uX+v\|^p1_{\{X\le A\}}\ge\frac{\mu^p}{8^p}\|u\|^p$.
Lemma 8. Assume that (15) and (5) hold. Then for any $v_0,v_1,\ldots,v_n\in(F,\|\cdot\|)$ we have
$$E\Big\|\sum_{i=0}^nv_iR_i\Big\|^p\ge\frac{\mu^{2p}}{64^p}\max_{1\le i\le n}\|v_i\|^p\ge\frac{\mu^{2p}}{64^p}\cdot\frac1n\sum_{i=1}^n\|v_i\|^p.$$
Proof. For $1\le j\le n$ we have $\sum_{i=0}^nv_iR_i=Y+X_j(v_jR_{j-1}+X_{j+1}Z)$, where $Y$ and $Z$ are independent of $X_j$ and $X_{j+1}$. Observe that $EX_j\le1$ and $EX_{j+1}\le1$. Thus, using Lemma 7 twice, we obtain
$$E\Big\|\sum_{i=0}^nv_iR_i\Big\|^p\ge\frac{\mu^p}{8^p}E\|v_jR_{j-1}+X_{j+1}Z\|^p\ge\frac{\mu^{2p}}{64^p}\|v_j\|^pER_{j-1}^p=\frac{\mu^{2p}}{64^p}\|v_j\|^p.$$
Lemma 9. Assume that (15) holds and that there exist $q>1$ and $0<\lambda<1$ such that $(EX_i^q)^{1/q}\le\lambda$ for all $i$. Then for any $v_0,v_1,\ldots,v_n\in(F,\|\cdot\|)$ and $t>0$,
$$P\Bigg(\Big\|\sum_{i=0}^nv_iR_i\Big\|^p\ge t\sum_{i=0}^n\lambda^i\|v_i\|^p\Bigg)\le(1-\lambda)^{\frac{(1-p)q}p}t^{-q/p}.$$
Proof. Using Minkowski's and Hölder's inequalities we obtain
$$\Bigg(E\Big\|\sum_{i=0}^nv_iR_i\Big\|^q\Bigg)^{1/q}\le\sum_{i=0}^n(E\|v_iR_i\|^q)^{1/q}\le\sum_{i=0}^n\|v_i\|\lambda^i=\sum_{i=0}^n\|v_i\|\lambda^{i/p}\lambda^{\frac{p-1}pi}\le\Bigg(\sum_{i=0}^n\|v_i\|^p\lambda^i\Bigg)^{1/p}\Bigg(\sum_{i=0}^n\lambda^i\Bigg)^{\frac{p-1}p}.$$
Thus,
$$E\Big\|\sum_{i=0}^nv_iR_i\Big\|^q\le\Bigg(\sum_{i=0}^n\|v_i\|^p\lambda^i\Bigg)^{q/p}(1-\lambda)^{-\frac{(p-1)q}p}.$$
By Chebyshev's inequality we get
$$P\Bigg(\Big\|\sum_{i=0}^nv_iR_i\Big\|^q\ge t^{q/p}\Bigg(\sum_{i=0}^n\lambda^i\|v_i\|^p\Bigg)^{q/p}\Bigg)\le(1-\lambda)^{\frac{(1-p)q}p}t^{-q/p}.$$
Lemma 10. Let $Y,Z$ be random vectors with values in a normed space $F$ and let $p\ge1$. Suppose that $E\|Y\|^{p-1}\|Z\|\le\gamma E\|Z\|^p$. Then
$$E\|Y+Z\|^p\ge E\|Y\|^p+\Big(\frac1{3^p}-2p\gamma\Big)E\|Z\|^p.$$
Proof. For any real numbers $a,b$ we have $|a+b|^p\ge|a|^p-p|a|^{p-1}|b|$. If, additionally, $|a|\le\frac13|b|$, then $|a+b|^p\ge|a|^p+\frac1{3^p}|b|^p$. Taking $a=\|Y\|$, $b=-\|Z\|$ and using the inequality $\|Y+Z\|\ge\big|\|Y\|-\|Z\|\big|$ we obtain
$$E\|Y+Z\|^p=E\|Y+Z\|^p1_{\{\|Y\|\le\frac13\|Z\|\}}+E\|Y+Z\|^p1_{\{\|Y\|>\frac13\|Z\|\}}$$
$$\ge E\|Y\|^p1_{\{\|Y\|\le\frac13\|Z\|\}}+\frac1{3^p}E\|Z\|^p1_{\{\|Y\|\le\frac13\|Z\|\}}+E\|Y\|^p1_{\{\|Y\|>\frac13\|Z\|\}}-pE\|Y\|^{p-1}\|Z\|1_{\{\|Y\|>\frac13\|Z\|\}}$$
$$=E\|Y\|^p+\frac1{3^p}E\|Z\|^p\big(1-1_{\{\|Y\|>\frac13\|Z\|\}}\big)-pE\|Y\|^{p-1}\|Z\|1_{\{\|Y\|>\frac13\|Z\|\}}.$$
Note that
$$E\Big(\frac1{3^p}\|Z\|^p+p\|Y\|^{p-1}\|Z\|\Big)1_{\{\|Y\|>\frac13\|Z\|\}}\le\Big(\frac13+p\Big)E\|Y\|^{p-1}\|Z\|\le2p\gamma E\|Z\|^p.$$
Therefore,
$$E\|Y+Z\|^p\ge E\|Y\|^p+\frac1{3^p}E\|Z\|^p-2p\gamma E\|Z\|^p.$$
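The two scalar inequalities used at the start of this proof can be verified by brute force on a grid; the following check (our own sanity test, not part of the argument) covers several values of $p\ge1$:

```python
import numpy as np

# Check, for p >= 1 and real a, b:
#   (i)  |a+b|^p >= |a|^p - p*|a|^(p-1)*|b|
#   (ii) if |a| <= |b|/3, then |a+b|^p >= |a|^p + (1/3^p)*|b|^p
for p in (1.0, 1.5, 2.0, 3.7):
    a = np.linspace(-5, 5, 201)[:, None]
    b = np.linspace(-5, 5, 201)[None, :]
    lhs = np.abs(a + b) ** p
    assert np.all(lhs >= np.abs(a) ** p - p * np.abs(a) ** (p - 1) * np.abs(b) - 1e-9)
    mask = np.abs(a) <= np.abs(b) / 3
    assert np.all((lhs >= np.abs(a) ** p + np.abs(b) ** p / 3 ** p - 1e-9) | ~mask)
print("both inequalities hold on the grid")
```

Inequality (ii) reduces, in the worst case $|a|=|b|/3$ with opposite signs, to $(2/3)^p\ge2\cdot3^{-p}$, i.e. $2^p\ge2$, which is exactly where $p\ge1$ is used.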
We are now able to state the key proposition, which will easily yield the lower bound in Theorem 3.

Proposition 11. Let $p>1$ and let r.v.'s $X_1,X_2,\ldots$ satisfy assumptions (15), (5) and (6). Then there exist constants $\varepsilon_0,\varepsilon_1,C_0>0$, depending only on $p,\mu,A,q$ and $\lambda$, such that for any vectors $v_0,v_1,\ldots,v_n$ in a normed space $(F,\|\cdot\|)$ and any $k\ge1$ we have
$$E\Big\|\sum_{i=0}^nv_iR_i\Big\|^p\ge\varepsilon_0\|v_0\|^p+\sum_{i=1}^n\Big(\frac{\varepsilon_1}k-c_i\Big)\|v_i\|^p,\qquad(16)$$
where
$$c_i=0\ \text{ for }1\le i\le k-1,\qquad c_i=\Phi\sum_{j=k}^i\lambda^j\ \text{ for }i\ge k,\qquad\text{and}\qquad\Phi=C_0\lambda^{(p-1)k}.$$
Proof. Define
$$\varepsilon_0:=\min\Big\{\frac1{4\cdot3^p},\frac{\mu^p}{8\cdot24^p}\Big\},\qquad\varepsilon_1:=\min\Big\{\frac{\mu^p}{8^p},\frac{\mu^{2p}}{2^{p-1}\cdot64^p}\varepsilon_0\Big\};$$
the value of $C_0$ will be chosen later. In the proof, $\varepsilon_2,C_1,C_2,C_3$ will denote finite nonnegative constants that depend only on the parameters $p,\mu,A,q$ and $\lambda$.

We fix $k\ge1$ and prove (16) by induction on $n$. From Lemma 7 and Lemma 8 we obtain
$$E\Big\|\sum_{i=0}^nv_iR_i\Big\|^p\ge2\varepsilon_0\|v_0\|^p,\qquad E\Big\|\sum_{i=0}^nv_iR_i\Big\|^p\ge\frac{2\varepsilon_1}n\sum_{i=1}^n\|v_i\|^p.$$
Therefore for $n\le k$ we have
$$E\Big\|\sum_{i=0}^nv_iR_i\Big\|^p\ge\varepsilon_0\|v_0\|^p+\frac{\varepsilon_1}k\sum_{i=1}^n\|v_i\|^p.$$
Suppose that the induction assertion holds for some $n\ge k$. We show it for $n+1$. To this end we consider two cases.

Case 1. $\varepsilon_0\|v_0\|^p\le\Phi\sum_{i=k}^{n+1}\lambda^i\|v_i\|^p$.

Applying the induction assumption conditionally on $X_1$ we obtain
$$E\Big\|\sum_{i=0}^{n+1}v_iR_i\Big\|^p\ge\varepsilon_0E\|v_0+v_1X_1\|^p+\sum_{i=2}^{n+1}\Big(\frac{\varepsilon_1}k-c_{i-1}\Big)E\|X_1v_i\|^p\ge\frac{\varepsilon_1}k\|v_1\|^p+\sum_{i=2}^{n+1}\Big(\frac{\varepsilon_1}k-c_{i-1}\Big)\|v_i\|^p$$
$$\ge\varepsilon_0\|v_0\|^p-\Phi\sum_{i=k}^{n+1}\lambda^i\|v_i\|^p+\frac{\varepsilon_1}k\|v_1\|^p+\sum_{i=2}^{n+1}\Big(\frac{\varepsilon_1}k-c_{i-1}\Big)\|v_i\|^p=\varepsilon_0\|v_0\|^p+\sum_{i=1}^{n+1}\Big(\frac{\varepsilon_1}k-c_i\Big)\|v_i\|^p,$$
where the second inequality follows from Lemma 7.
Case 2. $\varepsilon_0\|v_0\|^p>\Phi\sum_{i=k}^{n+1}\lambda^i\|v_i\|^p$.

Define the event $A_k\in\sigma(X_1,\ldots,X_k)$ by
$$A_k:=\{X_1\le A,\ R_{2,k}\le2^{1/q}\lambda^{k-1}\}.$$
By the induction assumption used conditionally on $X_1,\ldots,X_k$ we have
$$E\Big\|\sum_{i=0}^{n+1}v_iR_i\Big\|^p1_{\Omega\setminus A_k}\ge\varepsilon_0E\Big\|\sum_{i=0}^kv_iR_i\Big\|^p1_{\Omega\setminus A_k}+\sum_{i=k+1}^{n+1}\Big(\frac{\varepsilon_1}k-c_{i-k}\Big)E\|v_iR_k\|^p1_{\Omega\setminus A_k}.\qquad(17)$$
We have, by Chebyshev's inequality and (6),
$$P\big(R_{2,k}\ge2^{1/q}\lambda^{k-1}\big)\le\frac{ER_{2,k}^q}{2\lambda^{(k-1)q}}\le\frac12.\qquad(18)$$
Together with (5) this implies $P(A_k)>0$. Let $(Y,Y',Z)$ have the same distribution as the random vector $\big(\sum_{i=k}^{n+1}v_iR_i,\ \sum_{i=k}^{n+1}v_iR_{k+1,i},\ \sum_{i=0}^{k-1}v_iR_i\big)$ conditioned on the event $A_k$. Note that
$$E\Big\|\sum_{i=0}^{n+1}v_iR_i\Big\|^p1_{A_k}=P(A_k)\,E\|Y+Z\|^p.$$
Applying Lemma 7 conditionally we obtain
$$E\|Z\|^p=\frac1{P(A_k)}E\Big\|\sum_{i=0}^{k-1}v_iR_i\Big\|^p1_{\{X_1\le A\}}1_{\{R_{2,k}\le2^{1/q}\lambda^{k-1}\}}\ge\frac{\mu^p}{8^p}\|v_0\|^p\frac{P(R_{2,k}\le2^{1/q}\lambda^{k-1})}{P(A_k)}=\frac{\mu^p}{8^p}\|v_0\|^p\frac1{P(X_1\le A)}\ge\frac{\mu^p}{8^p}\|v_0\|^p.\qquad(19)$$
Note that $Y'$ has the same distribution as $\sum_{i=k}^{n+1}v_iR_{k+1,i}$ and is independent of $Z$. We have, for $t>0$,
$$P\big(\|Y\|^p\ge tE\|Z\|^p\big)\le P\Big(A^p\lambda^{p(k-1)}2^{p/q}\|Y'\|^p\ge t\frac{\mu^p}{8^p}\|v_0\|^p\Big)\le P\Bigg(A^p\lambda^{p(k-1)}2^{p/q}\|Y'\|^p\ge t\frac{\mu^p}{8^p}\frac\Phi{\varepsilon_0}\sum_{i=k}^{n+1}\lambda^i\|v_i\|^p\Bigg)$$
$$=P\Bigg(\|Y'\|^p\ge tC_0\varepsilon_2\sum_{i=k}^{n+1}\lambda^{i-k}\|v_i\|^p\Bigg)\le C_1(tC_0)^{-q/p},\qquad(20)$$
where the last inequality follows by Lemma 9 (recall that $\varepsilon_2$ and $C_1$ denote constants depending only on $p,\mu,A,q$ and $\lambda$).
In order to use Lemma 10 we need to estimate $E\|Y\|^{p-1}\|Z\|$. To this end take $\delta>0$ and observe first that
$$E\|Y\|^{p-1}\|Z\|\le E\|Y\|^{p-1}\|Z\|1_{\{\|Y\|^p\le\delta E\|Z\|^p\}}+E\|Y\|^{p-1}\|Z\|1_{\{\|Z\|^p\le\delta E\|Z\|^p\}}+E\|Y\|^{p-1}\|Z\|1_{\{\|Y\|^p>\delta E\|Z\|^p\}}1_{\{\|Z\|^p>\delta E\|Z\|^p\}}.\qquad(21)$$
Clearly,
$$E\|Y\|^{p-1}\|Z\|1_{\{\|Y\|^p\le\delta E\|Z\|^p\}}\le\delta^{\frac{p-1}p}(E\|Z\|^p)^{\frac{p-1}p}E\|Z\|\le\delta^{\frac{p-1}p}E\|Z\|^p.\qquad(22)$$
To estimate the next term in (21) note that
$$E\|Y\|^{p-1}\|Z\|1_{\{\|Z\|^p\le\delta E\|Z\|^p\}}\le\delta^{1/p}(E\|Z\|^p)^{1/p}E\|Y\|^{p-1}.$$
Using estimate (20) we obtain
$$E\|Y\|^{p-1}=(E\|Z\|^p)^{\frac{p-1}p}\int_0^\infty P\big(\|Y\|^p\ge s^{\frac p{p-1}}E\|Z\|^p\big)\,ds\le(E\|Z\|^p)^{\frac{p-1}p}\int_0^\infty\min\{1,C_1C_0^{-q/p}s^{-\frac q{p-1}}\}\,ds\le(E\|Z\|^p)^{\frac{p-1}p}\big(1+C_2C_0^{-q/p}\big),$$
where the last inequality follows since $q>p-1$. Thus,
$$E\|Y\|^{p-1}\|Z\|1_{\{\|Z\|^p\le\delta E\|Z\|^p\}}\le\delta^{1/p}\big(1+C_2C_0^{-q/p}\big)E\|Z\|^p.\qquad(23)$$
We are left with estimating the last term in (21). We have
$$E\|Y\|^{p-1}\|Z\|1_{\{\|Y\|^p>\delta E\|Z\|^p\}}1_{\{\|Z\|^p>\delta E\|Z\|^p\}}=\sum_{m=0}^\infty E\|Y\|^{p-1}\|Z\|1_{\{2^m\delta E\|Z\|^p<\|Y\|^p\le2^{m+1}\delta E\|Z\|^p\}}1_{\{\|Z\|^p>\delta E\|Z\|^p\}}$$
$$\le\sum_{m=0}^\infty2^{(m+1)\frac{p-1}p}\delta^{\frac{p-1}p}(E\|Z\|^p)^{\frac{p-1}p}E\|Z\|1_{\{2^m\delta E\|Z\|^p<\|Y\|^p\}}1_{\{\|Z\|^p>\delta E\|Z\|^p\}}\le\sum_{m=0}^\infty2^{(m+1)\frac{p-1}p}E\|Z\|^p1_{\{2^m\delta E\|Z\|^p<\|Y\|^p\}},$$
where in the last step we used $\|Z\|^{p-1}>\delta^{\frac{p-1}p}(E\|Z\|^p)^{\frac{p-1}p}$ on the event $\{\|Z\|^p>\delta E\|Z\|^p\}$. Recall that $Z$ and $Y'$ are independent. Therefore, as in (20), we get
$$E\|Z\|^p1_{\{2^m\delta E\|Z\|^p<\|Y\|^p\}}\le E\|Z\|^p1_{\{\|Y'\|^p\ge2^m\delta C_0\varepsilon_2\sum_{i=k}^{n+1}\lambda^{i-k}\|v_i\|^p\}}\le E\|Z\|^p\,P\Bigg(\|Y'\|^p\ge2^m\delta C_0\varepsilon_2\sum_{i=k}^{n+1}\lambda^{i-k}\|v_i\|^p\Bigg)\le E\|Z\|^pC_1(2^m\delta C_0)^{-q/p}.$$
We arrive at
$$E\|Y\|^{p-1}\|Z\|1_{\{\|Y\|^p>\delta E\|Z\|^p\}}1_{\{\|Z\|^p>\delta E\|Z\|^p\}}\le E\|Z\|^pC_1(\delta C_0)^{-q/p}\sum_{m=0}^\infty2^{(m+1)\frac{p-1}p}2^{-\frac{mq}p}\le E\|Z\|^pC_3(\delta C_0)^{-q/p},\qquad(24)$$
where we have used the fact that $q>p-1$.
Estimates (21)-(24) imply
$$E\|Y\|^{p-1}\|Z\|\le E\|Z\|^p\Big(\delta^{\frac{p-1}p}+\delta^{1/p}\big(1+C_2C_0^{-q/p}\big)+C_3(\delta C_0)^{-q/p}\Big).$$
Now we choose $\delta=\delta(p)$ sufficiently small, and then $C_0=C_0(p,A,\mu,q,\lambda)$ sufficiently large, to obtain
$$E\|Y\|^{p-1}\|Z\|\le\frac1{4p\cdot3^p}E\|Z\|^p.\qquad(25)$$
From Lemma 10 we deduce
$$E\|Y+Z\|^p\ge E\|Y\|^p+\frac1{2\cdot3^p}E\|Z\|^p.$$
Hence
$$E\Big\|\sum_{i=0}^{n+1}v_iR_i\Big\|^p1_{A_k}\ge\frac1{2\cdot3^p}E\Big\|\sum_{i=0}^{k-1}v_iR_i\Big\|^p1_{A_k}+E\Big\|\sum_{i=k}^{n+1}v_iR_i\Big\|^p1_{A_k}.\qquad(26)$$
Lemma 7 and (18) yield
$$E\Big\|\sum_{i=0}^{k-1}v_iR_i\Big\|^p1_{A_k}\ge\frac{\mu^p}{8^p}\|v_0\|^pP\big(R_{2,k}\le2^{1/q}\lambda^{k-1}\big)\ge\frac12\cdot\frac{\mu^p}{8^p}\|v_0\|^p.$$
It follows that
$$\frac1{2\cdot3^p}E\Big\|\sum_{i=0}^{k-1}v_iR_i\Big\|^p1_{A_k}\ge\varepsilon_0\|v_0\|^p+\varepsilon_0E\Big\|\sum_{i=0}^{k-1}v_iR_i\Big\|^p1_{A_k}.\qquad(27)$$
By the induction assumption we obtain
$$E\Big\|\sum_{i=k}^{n+1}v_iR_i\Big\|^p1_{A_k}\ge\varepsilon_0E\|v_kR_k\|^p1_{A_k}+\sum_{i=k+1}^{n+1}\Big(\frac{\varepsilon_1}k-c_{i-k}\Big)E\|v_iR_k\|^p1_{A_k}.\qquad(28)$$
Combining (26), (27) and (28) we get
$$E\Big\|\sum_{i=0}^{n+1}v_iR_i\Big\|^p1_{A_k}\ge\varepsilon_0\|v_0\|^p+\varepsilon_0E\Big\|\sum_{i=0}^{k-1}v_iR_i\Big\|^p1_{A_k}+\varepsilon_0E\|v_kR_k\|^p1_{A_k}+\sum_{i=k+1}^{n+1}\Big(\frac{\varepsilon_1}k-c_{i-k}\Big)E\|v_iR_k\|^p1_{A_k}$$
$$\ge\varepsilon_0\|v_0\|^p+\frac{\varepsilon_0}{2^{p-1}}E\Big\|\sum_{i=0}^kv_iR_i\Big\|^p1_{A_k}+\sum_{i=k+1}^{n+1}\Big(\frac{\varepsilon_1}k-c_{i-k}\Big)E\|v_iR_k\|^p1_{A_k}.$$
This inequality together with (17) and Lemma 8 yields
$$E\Big\|\sum_{i=0}^{n+1}v_iR_i\Big\|^p\ge\varepsilon_0\|v_0\|^p+\frac{\varepsilon_0}{2^{p-1}}E\Big\|\sum_{i=0}^kv_iR_i\Big\|^p+\sum_{i=k+1}^{n+1}\Big(\frac{\varepsilon_1}k-c_{i-k}\Big)E\|v_iR_k\|^p$$
$$\ge\varepsilon_0\|v_0\|^p+\frac{\varepsilon_1}k\sum_{i=1}^k\|v_i\|^p+\sum_{i=k+1}^{n+1}\Big(\frac{\varepsilon_1}k-c_{i-k}\Big)\|v_i\|^p\ge\varepsilon_0\|v_0\|^p+\sum_{i=1}^{n+1}\Big(\frac{\varepsilon_1}k-c_i\Big)\|v_i\|^p.$$
We are ready to prove the lower $L^p$-estimate for $p>1$.

Proof of the lower bound in Theorem 3. For sufficiently large $k$ we have, for all $i$,
$$c_i\le\frac{\Phi\lambda^k}{1-\lambda}=\frac{C_0\lambda^{pk}}{1-\lambda}\le\frac{\varepsilon_1}{2k}.$$
Thus, Proposition 11 yields
$$E\Big\|\sum_{i=0}^nv_iR_i\Big\|^p\ge\varepsilon_0\|v_0\|^p+\frac{\varepsilon_1}{2k}\sum_{i=1}^n\|v_i\|^p\ge\varepsilon\sum_{i=0}^n\|v_i\|^p,$$
where $\varepsilon:=\min\{\varepsilon_0,\frac{\varepsilon_1}{2k}\}$.
Remark. Observe that $\mu\le E|X_i-EX_i|\le2EX_i\le2(EX_i^p)^{1/p}=2$. This shows that
$$\varepsilon_0=\frac{\mu^p}{8\cdot24^p},\qquad\varepsilon_1=\frac{\mu^{2p}}{2^{p-1}\cdot64^p}\varepsilon_0\qquad\text{and}\qquad\min\Big\{\varepsilon_0,\frac{\varepsilon_1}{2k}\Big\}=\frac{\mu^{3p}}{8k\cdot2^{10p}\cdot3^p}.$$
The other constants used in the proof of Proposition 11 may be estimated as follows:
$$\varepsilon_2=\frac{\mu^p\lambda^p}{8^pA^p\varepsilon_0}2^{-p/q}\ge\Big(\frac{3\lambda}{2A}\Big)^p,\qquad C_1=(1-\lambda)^{\frac{(1-p)q}p}\varepsilon_2^{-q/p}\le(1-\lambda)^{\frac{(1-p)q}p}\Big(\frac{2A}{3\lambda}\Big)^q,$$
$$C_2\le\frac{p-1}{q+1-p}C_1\qquad\text{and}\qquad C_3=\frac{2^{q/p}}{2^{\frac{q+1-p}p}-1}C_1\le\frac{2p}{(q+1-p)\ln2}C_1.$$
Hence we can, for example, take
$$\delta:=48^{-\frac{p^2}{\min\{p-1,1\}}}\qquad\text{and}\qquad C_0:=(1-\lambda)^{1-p}\Big(\frac{2A}{3\lambda}\Big)^p\Big(\frac{2p}{(q+1-p)\ln2}\Big)^{p/q}48^{\frac{2p^2}{\min\{p-1,1\}}};$$
then each of the terms $\delta^{(p-1)/p}$, $\delta^{1/p}$, $\delta^{1/p}C_2C_0^{-q/p}$ and $C_3(\delta C_0)^{-q/p}$ is not greater than $48^{-p}\le(16p\cdot3^p)^{-1}$, and (25) holds.
3 Lower bound - p ≤ 1

In this section we prove the lower bound in Theorem 2. We again assume the normalization (15) and use notation similar to that of the case $p>1$. We begin with a result similar to Lemma 7.
Lemma 12. Let $X$ be a nonnegative random variable such that $EX^p=1$. Then for every $A>1$ and $u,v$ in a normed space $(F,\|\cdot\|)$ we have
$$E\|uX+v\|^p\ge E\|uX+v\|^p1_{\{X^p\le A\}}\ge\delta\max\{\|u\|^p,\|v\|^p\},\qquad\text{where}\qquad\delta:=E(X^p-1)1_{\{1\le X^p\le A\}}.$$

Proof. Since $EX^p=1$ we have
$$\delta=E(X^p-1)1_{\{1\le X^p\le A\}}\le E(X^p-1)1_{\{1\le X^p\}}=E(1-X^p)1_{\{X^p\le1\}}\le P(X^p\le1)\le P(X^p\le A).\qquad(29)$$
The triangle inequality yields $\|uX+v\|\ge\big|\|u\|X-\|v\|\big|$. Thus it suffices to prove that
$$E\big|\|u\|X-\|v\|\big|^p1_{\{X^p\le A\}}\ge\delta\max\{\|u\|^p,\|v\|^p\}.\qquad(30)$$
If $u=0$ then this inequality is satisfied due to (29). In the case $u\ne0$, divide both sides of (30) by $\|u\|^p$ to see that it is enough to show that
$$E|X-t|^p1_{\{X^p\le A\}}\ge\delta\max\{t^p,1\}\qquad\text{for }t\ge0.$$
To prove this inequality let us consider two cases. First assume that $t\in[0,1]$. Then we have
$$E|X-t|^p1_{\{X^p\le A\}}\ge E|X-t|^p1_{\{1\le X^p\le A\}}\ge E(X^p-t^p)1_{\{1\le X^p\le A\}}\ge E(X^p-1)1_{\{1\le X^p\le A\}}=\delta=\delta\max\{t^p,1\}.$$
In the case $t>1$ it suffices to note that
$$E|X-t|^p1_{\{X^p\le A\}}\ge E|X-t|^p1_{\{X^p\le1\}}\ge E(t^p-X^p)1_{\{X^p\le1\}}\ge E(t^p-t^pX^p)1_{\{X^p\le1\}}=t^pE(1-X^p)1_{\{X^p\le1\}}\ge\delta t^p=\delta\max\{t^p,1\},$$
where the last inequality follows from (29).
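The quantity $\delta$ and inequality (30) are concrete enough to test numerically; the following sketch (our own check, with an arbitrary normalized exponential $X$ and $A=4$) verifies $E|X-t|^p1_{\{X^p\le A\}}\ge\delta\max\{t^p,1\}$ for a few values of $t$:

```python
import numpy as np

rng = np.random.default_rng(2)
p, A = 0.5, 4.0

# X >= 0 with E X^p = 1: take X = W / E[W^p]^(1/p) for W exponential.
W = rng.exponential(size=10 ** 6)
X = W / np.mean(W ** p) ** (1 / p)

delta = np.mean((X ** p - 1) * ((X ** p >= 1) & (X ** p <= A)))
assert 0 < delta < 1

# Lemma 12 with u = 1, v = -t, so that ||uX + v|| = |X - t|:
for t in (0.0, 0.5, 1.0, 2.0, 5.0):
    lhs = np.mean(np.abs(X - t) ** p * (X ** p <= A))
    assert lhs >= delta * max(t ** p, 1.0) - 1e-3
print(f"delta = {delta:.3f}")
```

The lower bound is far from tight for this distribution, which is consistent with the lemma: it only needs some uniform constant $\delta>0$.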
As a consequence, in the same way as in Lemma 8, we derive the following estimate.

Lemma 13. Let r.v.'s $X_1,X_2,\ldots$ satisfy (15) and (4). Then for any vectors $v_0,v_1,\ldots,v_n\in F$ we get
$$E\Big\|\sum_{i=0}^nv_iR_i\Big\|^p\ge\delta^2\max_{1\le i\le n}\|v_i\|^p\ge\frac{\delta^2}n\sum_{i=1}^n\|v_i\|^p.$$
Lemma 14. Suppose that random variables $X_1,X_2,\ldots$ satisfy assumptions (15) and (3). Then for all vectors $v_0,v_1,\ldots,v_n$ in $(F,\|\cdot\|)$ and $t>0$ we have
$$P\Bigg(\Big\|\sum_{i=0}^nv_iR_i\Big\|^p\ge\frac t{1-\lambda}\sum_{i=0}^n\lambda^i\|v_i\|^p\Bigg)\le\frac1{\sqrt t}.$$

Proof. Note that
$$E\Big\|\sum_{i=0}^nv_iR_i\Big\|^{p/2}\le\sum_{i=0}^n\|v_i\|^{p/2}ER_i^{p/2}\le\sum_{i=0}^n\lambda^i\|v_i\|^{p/2}.$$
By the Cauchy-Schwarz inequality we get
$$\Bigg(\sum_{i=0}^n\lambda^i\|v_i\|^{p/2}\Bigg)^2\le\sum_{i=0}^n\lambda^i\sum_{i=0}^n\lambda^i\|v_i\|^p\le\frac1{1-\lambda}\sum_{i=0}^n\lambda^i\|v_i\|^p.$$
Thus, using Chebyshev's inequality, we arrive at
$$P\Bigg(\Big\|\sum_{i=0}^nv_iR_i\Big\|^p\ge\frac t{1-\lambda}\sum_{i=0}^n\lambda^i\|v_i\|^p\Bigg)\le P\Bigg(\Big\|\sum_{i=0}^nv_iR_i\Big\|^{p/2}\ge\sqrt t\sum_{i=0}^n\lambda^i\|v_i\|^{p/2}\Bigg)\le\Bigg(\sqrt t\sum_{i=0}^n\lambda^i\|v_i\|^{p/2}\Bigg)^{-1}E\Big\|\sum_{i=0}^nv_iR_i\Big\|^{p/2}\le\frac1{\sqrt t}.$$
Our next lemma is in the spirit of Lemma 10, but it has a simpler proof.

Lemma 15. Let $Y,Z$ be random vectors with values in a normed space $(F,\|\cdot\|)$ such that
$$E\|Z\|^p1_{\{\|Y\|^p\ge\frac18E\|Z\|^p\}}\le\frac18E\|Z\|^p.$$
Then
$$E\|Y+Z\|^p\ge E\|Y\|^p+\frac12E\|Z\|^p.$$

Proof. For any $u,v\in F$ we have $\|u+v\|^p\ge\big|\|u\|-\|v\|\big|^p\ge\|u\|^p-\|v\|^p$, therefore
$$E\|Y+Z\|^p\ge E\big(\|Y\|^p+\|Z\|^p-2\|Z\|^p\big)1_{\{\|Y\|^p\ge\frac18E\|Z\|^p\}}+E\big(\|Y\|^p+\|Z\|^p-2\|Y\|^p\big)1_{\{\|Y\|^p<\frac18E\|Z\|^p\}}$$
$$\ge E\|Y\|^p+E\|Z\|^p-2E\|Z\|^p1_{\{\|Y\|^p\ge\frac18E\|Z\|^p\}}-2E\|Y\|^p1_{\{\|Y\|^p<\frac18E\|Z\|^p\}}\ge E\|Y\|^p+E\|Z\|^p-2\cdot\frac18E\|Z\|^p-2\cdot\frac18E\|Z\|^p=E\|Y\|^p+\frac12E\|Z\|^p.$$
The proof of the lower bound for $p\le1$ is similar to the proof for $p>1$ and relies on a proposition similar to Proposition 11.

Proposition 16. Let $0<p\le1$ and let r.v.'s $X_1,X_2,\ldots$ satisfy assumptions (15), (3) and (4). Then for any vectors $v_0,v_1,\ldots,v_n$ in a normed space $(F,\|\cdot\|)$ and any integer $k\ge1$ we have
$$E\Big\|\sum_{i=0}^nv_iR_i\Big\|^p\ge\varepsilon_0\|v_0\|^p+\sum_{i=1}^n\Big(\frac{\varepsilon_1}k-c_i\Big)\|v_i\|^p,$$
where $\varepsilon_0=\delta/8$, $\varepsilon_1=\delta^3/8$ and
$$c_i=0\ \text{ for }1\le i\le k-1,\qquad c_i=\Phi\sum_{j=k}^i\lambda^j\ \text{ for }i\ge k,\qquad\text{and}\qquad\Phi=\frac{2^8A}{1-\lambda}\lambda^{k-2}.$$
Proof. For $n\le k$ the assertion follows from Lemmas 12 and 13, since $\varepsilon_0\le\delta/2$ and $\varepsilon_1/k\le\varepsilon_1/n\le\delta^2/(2n)$. For $n\ge k$ we proceed by induction on $n$.

Case 1. $\varepsilon_0\|v_0\|^p\le\Phi\sum_{i=k}^{n+1}\lambda^i\|v_i\|^p$.

In this case the induction step is the same as in the proof of Proposition 11.
Case 2. $\varepsilon_0\|v_0\|^p>\Phi\sum_{i=k}^{n+1}\lambda^i\|v_i\|^p$. Let us define the event
$$A_k:=\{X_1^p\le A,\ R_{2,k}^p\le4\lambda^{2k-2}\}.$$
By the induction hypothesis we have
$$E\Big\|\sum_{i=0}^{n+1}v_iR_i\Big\|^p1_{\Omega\setminus A_k}\ge\varepsilon_0E\Big\|\sum_{i=0}^kv_iR_i\Big\|^p1_{\Omega\setminus A_k}+\sum_{i=k+1}^{n+1}\Big(\frac{\varepsilon_1}k-c_{i-k}\Big)E\|v_iR_k\|^p1_{\Omega\setminus A_k}.\qquad(31)$$
By Chebyshev's inequality and (3) we get
$$P\big(R_{2,k}^p>4\lambda^{2k-2}\big)\le\frac{ER_{2,k}^{p/2}}{2\lambda^{k-1}}\le\frac12,\qquad(32)$$
in particular $P(A_k)>0$. Let $Y,Y',Z$ be defined as in the proof of Proposition 11. As in (19) we show that Lemma 12 yields $E\|Z\|^p\ge\delta\|v_0\|^p$. We have $\|Y\|^p\le4A\lambda^{2k-2}\|Y'\|^p$, the variables $Y'$ and $Z$ are independent, and $Y'$ has the same distribution as $\sum_{i=k}^{n+1}v_iR_{k+1,i}$. Thus,
$$E\|Z\|^p1_{\{\|Y\|^p\ge\frac18E\|Z\|^p\}}\le E\|Z\|^p1_{\{4A\lambda^{2k-2}\|Y'\|^p\ge\frac\delta8\|v_0\|^p\}}=E\|Z\|^p\,P\Big(\|Y'\|^p\ge\frac{\varepsilon_0}{4A\lambda^{2k-2}}\|v_0\|^p\Big)$$
$$\le E\|Z\|^p\,P\Bigg(\Big\|\sum_{i=k}^{n+1}v_iR_{k+1,i}\Big\|^p\ge\frac{2^6}{1-\lambda}\sum_{i=k}^{n+1}\lambda^{i-k}\|v_i\|^p\Bigg)\le\frac18E\|Z\|^p,$$
where the second inequality follows from the assumption of Case 2 and the definition of $\Phi$, and the last one from Lemma 14. Hence, Lemma 15 yields
$$E\|Y+Z\|^p\ge E\|Y\|^p+\frac12E\|Z\|^p.$$
Thus
$$E\Big\|\sum_{i=0}^{n+1}v_iR_i\Big\|^p1_{A_k}\ge\frac12E\Big\|\sum_{i=0}^{k-1}v_iR_i\Big\|^p1_{A_k}+E\Big\|\sum_{i=k}^{n+1}v_iR_i\Big\|^p1_{A_k}.\qquad(33)$$
Using Lemma 12 and (32) we obtain
$$E\Big\|\sum_{i=0}^{k-1}v_iR_i\Big\|^p1_{A_k}\ge\delta\|v_0\|^pP\big(R_{2,k}^p\le4\lambda^{2k-2}\big)\ge\frac\delta2\|v_0\|^p.$$
Since $\varepsilon_0\le\frac14$ and $\varepsilon_0\le\delta/8$, it follows that
$$\frac12E\Big\|\sum_{i=0}^{k-1}v_iR_i\Big\|^p1_{A_k}\ge\varepsilon_0\|v_0\|^p+\varepsilon_0E\Big\|\sum_{i=0}^{k-1}v_iR_i\Big\|^p1_{A_k}.\qquad(34)$$
By the induction assumption we obtain
$$E\Big\|\sum_{i=k}^{n+1}v_iR_i\Big\|^p1_{A_k}\ge\varepsilon_0E\|v_kR_k\|^p1_{A_k}+\sum_{i=k+1}^{n+1}\Big(\frac{\varepsilon_1}k-c_{i-k}\Big)E\|v_iR_k\|^p1_{A_k}.\qquad(35)$$
Combining (33), (34) and (35) we arrive at
$$E\Big\|\sum_{i=0}^{n+1}v_iR_i\Big\|^p1_{A_k}\ge\varepsilon_0\|v_0\|^p+\varepsilon_0E\Big\|\sum_{i=0}^{k-1}v_iR_i\Big\|^p1_{A_k}+\varepsilon_0E\|v_kR_k\|^p1_{A_k}+\sum_{i=k+1}^{n+1}\Big(\frac{\varepsilon_1}k-c_{i-k}\Big)E\|v_iR_k\|^p1_{A_k}$$
$$\ge\varepsilon_0\|v_0\|^p+\varepsilon_0E\Big\|\sum_{i=0}^kv_iR_i\Big\|^p1_{A_k}+\sum_{i=k+1}^{n+1}\Big(\frac{\varepsilon_1}k-c_{i-k}\Big)E\|v_iR_k\|^p1_{A_k}.$$
Combining this inequality with (31) yields
$$E\Big\|\sum_{i=0}^{n+1}v_iR_i\Big\|^p\ge\varepsilon_0\|v_0\|^p+\varepsilon_0E\Big\|\sum_{i=0}^kv_iR_i\Big\|^p+\sum_{i=k+1}^{n+1}\Big(\frac{\varepsilon_1}k-c_{i-k}\Big)E\|v_iR_k\|^p$$
$$\ge\varepsilon_0\|v_0\|^p+\frac{\varepsilon_1}k\sum_{i=1}^k\|v_i\|^p+\sum_{i=k+1}^{n+1}\Big(\frac{\varepsilon_1}k-c_{i-k}\Big)\|v_i\|^p\ge\varepsilon_0\|v_0\|^p+\sum_{i=1}^{n+1}\Big(\frac{\varepsilon_1}k-c_i\Big)\|v_i\|^p,$$
where in the second inequality we used Lemma 13.
We are now ready to establish the lower $L^p$-bound for $p\le1$.

Proof of the lower bound in Theorem 2. To show the lower bound let us choose $k$ such that
$$k\lambda^{2k-2}\le\frac{\delta^3(1-\lambda)^2}{2^{12}A}.$$
Then
$$c_i\le\frac{\Phi\lambda^k}{1-\lambda}=\frac{2^8A\lambda^{2k-2}}{(1-\lambda)^2}\le\frac{\varepsilon_1}{2k}.$$
Therefore, Proposition 16 implies
$$E\Big\|\sum_{i=0}^nv_iR_i\Big\|^p\ge\frac\delta8\|v_0\|^p+\frac{\delta^3}{16k}\sum_{i=1}^n\|v_i\|^p\ge\frac{\delta^3}{16k}\sum_{i=0}^n\|v_i\|^p.$$
4 Upper bounds

The upper bound in Theorem 2 follows immediately from the inequality $(a+b)^p\le a^p+b^p$ for $a,b\ge0$, $p\in(0,1]$. To get the upper bound in Theorem 3 we prove the following slightly more general result.

Proposition 17. Let $p>0$ and let $X_1,X_2,\ldots$ be independent random variables such that $E|X_i|^p<\infty$ for all $i$ and
$$\forall_{1\le k<\lceil p\rceil}\ \exists_{\lambda_k<1}\ \forall_i\quad(E|X_i|^{p-k})^{1/(p-k)}\le\lambda_k(E|X_i|^{p-k+1})^{1/(p-k+1)}.\qquad(36)$$
Then for any vectors $v_0,v_1,\ldots,v_n$ in a normed space $(F,\|\cdot\|)$ we have
$$E\Big\|\sum_{i=0}^nv_iR_i\Big\|^p\le C(p)\sum_{i=0}^n\|v_i\|^pE|R_i|^p,\qquad(37)$$
where $C(p)=1$ for $p\le1$ and, for $p\ge1$,
$$C(p)=2^p\Big(1+C(p-1)\frac{\lambda_1^{p-1}}{1-\lambda_1^{p-1}}\Big)\le\frac{2^pC(p-1)}{1-\lambda_1^{p-1}}.$$

Proof. We have $\|\sum_{i=0}^nv_iR_i\|\le\sum_{i=0}^n\|v_i\||R_i|$ and $|R_i|=\prod_{j=1}^i|X_j|$, so it is enough to consider the case when $F=\mathbb{R}$, $v_k\ge0$ and the variables $X_j$ are nonnegative. Since it is only a matter of normalization we may also assume that $EX_i^p=1$ for all $i$.

We proceed by induction on $m:=\lceil p\rceil$. If $m=1$, i.e. $0<p\le1$, then the assertion follows easily, since $(x+y)^p\le x^p+y^p$ for $x,y\ge0$. Suppose now that (37) holds whenever $p\le m$, and take $p$ such that $m<p\le m+1$. Observe that
$$(x+y)^p\le x^p+2^p(yx^{p-1}+y^p)\quad\text{for }x,y\ge0.\qquad(38)$$
Indeed, either $x\le y$, and then $(x+y)^p\le2^py^p$, or $0\le y<x$, and then by the convexity of $x\mapsto x^p$,
$$\frac{(x+y)^p-x^p}y\le\frac{(2x)^p-x^p}x=(2^p-1)x^{p-1}.$$
We have by (38)
$$E\Big(\sum_{i=0}^nv_iR_i\Big)^p\le E\Big(\sum_{i=1}^nv_iR_i\Big)^p+2^p\Bigg(v_0E\Big(\sum_{i=1}^nv_iR_i\Big)^{p-1}+v_0^p\Bigg).$$
Iterating this inequality we get
$$E\Big(\sum_{i=0}^nv_iR_i\Big)^p\le v_n^pER_n^p+2^p\Bigg(\sum_{k=0}^{n-1}v_kE\Bigg(R_k\Big(\sum_{i=k+1}^nv_iR_i\Big)^{p-1}\Bigg)+\sum_{i=0}^{n-1}