D. BURACZEWSKI, E. DAMEK, AND J. ZIENKIEWICZ

Abstract. We consider the stochastic difference equation on $\mathbb{R}$

$X_n = A_n X_{n-1} + B_n, \quad n \ge 1,$

where $(A_n, B_n) \in \mathbb{R}\times\mathbb{R}$ is an i.i.d. sequence of random variables and $X_0$ is an initial distribution. Under mild contractivity hypotheses the sequence $X_n$ converges in law to a random variable $S$, which is the unique solution of the random difference equation $S \stackrel{d}{=} AS + B$. We prove that under the Kesten–Goldie conditions

$\lim_{n\to\infty} \frac{\mathbb{E}|X_n|^\alpha}{n} = \alpha m_\alpha C,$

where $C$ is the Kesten–Goldie constant

$C = \lim_{t\to\infty} t^\alpha\, \mathbb{P}[|S| > t],$

$\alpha$ is the Cramér coefficient of $\log|A_1|$, $m_\alpha = \mathbb{E}[|A_1|^\alpha \log|A_1|]$, and $\mathbb{E}|X_0|^\alpha < \infty$. Thus, on the one hand we describe the behaviour of the $\alpha$th moments of the process $\{X_n\}$, and on the other we obtain an alternative formula for $C$. The results are further extended to a class of Lipschitz iterated systems and to a multidimensional setting.

1. Introduction

1.1. The random difference equation. We consider the stochastic difference equation on $\mathbb{R}$

(1.1) $X_n = A_n X_{n-1} + B_n, \quad n \ge 1,$

where $(A_n, B_n) \in \mathbb{R}\times\mathbb{R}$ is a sequence of i.i.d. (independent, identically distributed) random variables and $X_0 \in \mathbb{R}$ is an initial distribution. The generic element of the sequence $(A_n, B_n)$ will be denoted by $(A, B)$. Under mild contractivity hypotheses the sequence $X_n$ converges in law to a random variable $S$, which is the unique solution of the random difference equation

(1.2) $S \stackrel{d}{=} AS + B$, with $S$ independent of $(A, B)$;

see [18, 29]. Moreover, the solution $S$ can be written explicitly as

(1.3) $S = \sum_{n=0}^{\infty} A_1 \cdots A_n B_{n+1}.$
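As a purely illustrative complement (not part of the paper), the series (1.3) can be approximated numerically by truncation. All distributional choices below are our own assumptions, made so that $\mathbb{E}\log A < 0$ and the series converges almost surely:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_perpetuity(n_terms=200, n_paths=10_000):
    """Approximate S = sum_{n>=0} A_1...A_n B_{n+1} by truncating the series.

    Illustrative choice (not from the paper): log A ~ N(-0.5, 0.5^2),
    so E log A = -0.5 < 0 and the truncation error is negligible; B ~ N(0, 1).
    """
    A = rng.lognormal(mean=-0.5, sigma=0.5, size=(n_paths, n_terms))
    B = rng.normal(size=(n_paths, n_terms))
    # prefix products A_1...A_n, with the empty product (n = 0) equal to 1
    prods = np.cumprod(A, axis=1)
    prods = np.concatenate([np.ones((n_paths, 1)), prods[:, :-1]], axis=1)
    return (prods * B).sum(axis=1)

S = sample_perpetuity()
print(S.shape, float(np.median(np.abs(S))))
```

Since the partial products decay geometrically in this regime, 200 terms already give an accurate sample from the law of $S$.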

There is considerable interest in studying various aspects of the iteration (1.1) and, in particular, the tail behaviour of $S$. The story started with the seminal paper of Kesten [24], who formulated reasonable conditions for $S$ to have a heavy tail in the multidimensional case of (1.1), when the $A_n$ are matrices with positive entries and the $B_n$ are vectors. For $d = 1$ this means existence of the limit

(1.4) $\lim_{t\to\infty} t^\alpha\, \mathbb{P}[|S| > t] = C.$

Kesten's proof, being very general, is quite technical, and so for $d = 1$ a specific approach was needed. Some work was done by Grincevicius [20], but the complete picture is due to Goldie [19].

The authors were partially supported by NCN grant UMO-2011/01/M/ST1/04604. We thank the reviewer for their constructive comments, which helped us to improve the manuscript.


Theorem 1.5 ([19]). Assume that the following Kesten–Goldie conditions are satisfied:

• the law of $\log|A|$ conditioned on $A \ne 0$ is non-arithmetic;

• there is $\alpha > 0$ such that $\mathbb{E}|A|^\alpha = 1$, $\mathbb{E}[|A|^\alpha \log^+|A|] < \infty$ and $\mathbb{E}|B|^\alpha < \infty$;

• $\mathbb{P}[Ax + B = x] < 1$ for every $x \in \mathbb{R}$.

If $\mathbb{P}[A \ge 0] = 1$, then

(1.6) $C_+ = \lim_{t\to\infty} t^\alpha\,\mathbb{P}\{S > t\} = \frac{1}{\alpha m_\alpha}\,\mathbb{E}\big[((AS + B)^+)^\alpha - ((AS)^+)^\alpha\big],$

$C_- = \lim_{t\to\infty} t^\alpha\,\mathbb{P}\{S < -t\} = \frac{1}{\alpha m_\alpha}\,\mathbb{E}\big[((AS + B)^-)^\alpha - ((AS)^-)^\alpha\big],$

where $m_\alpha = \mathbb{E}[|A|^\alpha \log|A|] < \infty$.

If $\mathbb{P}[A < 0] > 0$, then

$C_+ = C_- = \frac{1}{2\alpha m_\alpha}\,\mathbb{E}\big[|AS + B|^\alpha - |AS|^\alpha\big].$

Moreover, $C = C_+ + C_- > 0$.
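For readers who wish to experiment, the exponent $\alpha$ in the second condition can be located numerically as the nonzero root of $\mathbb{E}|A|^\alpha = 1$. The following sketch is entirely our own illustration: the lognormal model and all parameters are assumptions, and in this lognormal case the nonzero root is known in closed form, $\alpha = -2\mu/\sigma^2$, which lets us check the answer.

```python
import numpy as np

# Illustrative sketch (our own example, not from the paper): locate the Cramer
# exponent alpha solving E|A|^alpha = 1 by bisection on a Monte Carlo estimate.
# Here log A ~ N(mu, sigma^2) with mu < 0; for lognormal A the nonzero root
# is alpha = -2*mu/sigma^2.
mu, sigma = -0.25, 1.0

def moment(alpha, n=200_000, seed=1):
    """Monte Carlo estimate of E A^alpha (fixed seed => smooth in alpha)."""
    rng = np.random.default_rng(seed)
    return float(np.mean(np.exp(alpha * rng.normal(mu, sigma, size=n))))

lo, hi = 0.1, 2.0          # moment(lo) < 1 < moment(hi) for these parameters
for _ in range(40):
    mid = 0.5 * (lo + hi)
    if moment(mid) < 1.0:
        lo = mid
    else:
        hi = mid
alpha_hat = 0.5 * (lo + hi)
alpha_exact = -2 * mu / sigma**2
print(alpha_hat, alpha_exact)
```

With these parameters the Monte Carlo root lands close to the exact value $0.5$; the fixed seed makes the bisection target a deterministic, smooth function of $\alpha$.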

In the present paper we prove that

(1.7) $C = \lim_{n\to\infty} \frac{1}{\alpha m_\alpha n}\,\mathbb{E}|X_n|^\alpha,$

provided $X_0$ is such that $\mathbb{E}|X_0|^\alpha < \infty$; see Theorem 2.5. In particular, the sequence of $\alpha$th moments of $X_n$ increases linearly, while for $\beta < \alpha$ the $\beta$th moments of $X_n$ are uniformly bounded, and for $\beta > \alpha$ they grow exponentially. We also obtain analogous expressions for the constants $C_+$ and $C_-$. Notice that the formulae (1.6) cannot be used to determine the value of $C$, since they depend on $S$, which in general is unknown. In contrast, formula (1.7) allows one to approximate $C$ as the limit of an expression depending on the input sequence $\{(A_n, B_n)\}_{n\in\mathbb{N}}$. Below, in Section 1.4, we present some other expressions for $C$.

In the case when $A \ge 0$, $B = 1$ and $X_0 = 0$, (1.6) can be found in Bartkiewicz, Jakubowski, Mikosch and Wintenberger [5]. Their simple argument generalizes far beyond the setting of [5], and the aim of the present paper is to shed light on that.
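The estimator suggested by (1.7) is straightforward to implement. Below is a Monte Carlo sketch; everything in it is an illustrative assumption of ours, not taken from the paper, including the lognormal model for $A$ and the resulting value of $m_\alpha$:

```python
import numpy as np

# Monte Carlo sketch of the estimator suggested by (1.7). Assumed setup (ours):
# log A ~ N(mu, sigma^2) with mu = -0.25, sigma = 1, so E|A|^alpha = 1 at
# alpha = 0.5 and m_alpha = E[A^alpha log A] = mu + alpha*sigma^2 = 0.25;
# B ~ N(0, 1) and X_0 = 0. The estimator converges slowly and |X_n|^alpha is
# heavy tailed, so the output is only a rough, noisy approximation of C.
rng = np.random.default_rng(7)
mu, sigma, alpha = -0.25, 1.0, 0.5
m_alpha = mu + alpha * sigma**2

def estimate_C(n_steps=50, n_paths=500_000):
    X = np.zeros(n_paths)
    for _ in range(n_steps):
        A = rng.lognormal(mu, sigma, size=n_paths)
        B = rng.normal(size=n_paths)
        X = A * X + B          # the affine recursion (1.1)
    return float(np.mean(np.abs(X) ** alpha) / (alpha * m_alpha * n_steps))

C_hat = estimate_C()
print(C_hat)
```

Note the trade-off: larger $n$ reduces the bias of (1.7) but inflates the Monte Carlo variance, because the tail of $|X_n|^\alpha$ is at the boundary of integrability.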

If $\mathbb{P}[A < 0] > 0$, then

$C_+ = C_- = \lim_{n\to\infty} \frac{1}{2\alpha m_\alpha n}\,\mathbb{E}|X_n|^\alpha > 0.$

In this case the support of $\nu$ is $\mathbb{R}$ (see e.g. [9]). If $\mathbb{P}[A \ge 0] = 1$, then an analogous argument gives

(1.8) $C_+ = \lim_{n\to\infty} \frac{1}{\alpha m_\alpha n}\,\mathbb{E}\big(((X_n)^+)^\alpha\big), \qquad C_- = \lim_{n\to\infty} \frac{1}{\alpha m_\alpha n}\,\mathbb{E}\big(((X_n)^-)^\alpha\big).$

Neither of the formulae (1.6), (1.8) guarantees positivity of the limiting constant, so it must be complemented by an additional argument. There is such an argument, and it is related to the support of $\nu$. If $\mathbb{P}[A \ge 0] = 1$, then $\operatorname{supp}\nu$ is either $\mathbb{R}$ or a half-line (see e.g. [9], Theorem 2.5.5 for a simple proof). If we assume additionally that $\operatorname{supp}\nu = \mathbb{R}$ and $\mathbb{P}[A > 0] = 1$, then both constants $C_+$, $C_-$ are strictly positive [22]; see also an alternative proof in [9], Theorem 2.4.7. Moreover, Proposition 2.5.4 in [9] gives a simple criterion for a half-line to be contained in $\operatorname{supp}\nu$, i.e. for the corresponding tail constant to be strictly positive. Suppose that there are $(a_1, b_1), (a_2, b_2) \in \operatorname{supp}\mu$ such that $a_1 < 1 < a_2$ and

(1.9) $\frac{b_2}{1 - a_2} < \frac{b_1}{1 - a_1};$

then $[c, \infty) \subset \operatorname{supp}\nu$ for some $c \in \mathbb{R}$, and $C_+ > 0$. If

(1.10) $\frac{b_2}{1 - a_2} > \frac{b_1}{1 - a_1},$

then $(-\infty, c] \subset \operatorname{supp}\nu$ for some $c \in \mathbb{R}$, and $C_- > 0$.

1.2. Lipschitz recursions. Formula (1.7) remains valid for one-dimensional iterated function systems, i.e. recursions of the type

$X_n = \psi_n(X_{n-1}), \quad n = 1, 2, \ldots, \quad X_0 = x,$

where the $\psi_n$ are random Lipschitz functions. Such systems, for certain affine-like functions, were already studied by Goldie [19], who adapted to them the approach working for (1.1). To generalize (1.7) we assume that the IFS is close to the affine recursion in the following sense:

$Ax - B \le \psi(x) \le Ax + B$ for some $(A, B)$ satisfying the Kesten–Goldie conditions (see Theorem 2.5).

Beginning in the early nineties, iterated function systems of i.i.d. Lipschitz maps (IFS) on a complete metric space have attracted a lot of attention: Alsmeyer [1], Arnold and Crauel [3], Brofferio and Buraczewski [6], Diaconis and Freedman [14], Duflo [15], Elton [16], Hennion and Hervé [23], Mirek [27], and they still do. In particular, it seems that modeling them after (1.1) has been very fruitful; see Alsmeyer [1] and Mirek [27]. Following this path, we prove (1.7) and (1.8) in the framework of such IFS. The details are given in Section 2.

1.3. Multidimensional case. Our techniques can be applied to (1.1) with multidimensional similarities $A_n$ instead of numbers [8] and, more generally, to IFS on $\mathbb{R}^d$ modeled on similarities [27]. This covers the case when $A_n, B_n$ are complex valued, i.e. when (1.1) is related to physical models via the complex-valued smoothing transform [11, 26]. Then, as [8, 27] show, the heavy-tail behaviour can be observed "in directions": the role of $C_+$, $C_-$ is played by a measure $\sigma_\mu$ on the unit sphere $\mathbb{S}^{d-1}$, i.e. for a suitable $W \subset \mathbb{S}^{d-1}$ we have

$t^\alpha\, \mathbb{P}\{|S| > t,\ S/|S| \in W\} \to \sigma_\mu(W)$ as $t \to \infty$.

Here the above is complemented by

(1.11) $\sigma_\mu(W) = \lim_{n\to\infty} \frac{1}{\alpha m_\alpha n}\,\mathbb{E}\big[|X_n|^\alpha\, \mathbf{1}_W(X_n/|X_n|)\big].$

The process $X_n$ and the tail behaviour of $S$ in the multidimensional case of (1.1) have been studied by various authors in various contexts, and regular variation of $S$ has been proved: [2, 10, 21, 22, 25]; but beyond similarities nothing like (1.11) has been obtained yet. A more detailed discussion is postponed to Section 4.

1.4. Previous results. The Kesten–Goldie theorem has found an enormous number of applications both in pure and applied mathematics; see [4, 9, 14, 28] and the comprehensive bibliography there. Therefore descriptions of the limiting constant $C$ are vital. We refer to [12, 17] for recent results. None of them, however, treats the case of general signed $A$ and $B$, or any aspect of the multidimensional case. The theory behind [12, 17] is quite advanced and the proofs are by no means simple. Compared to those papers, formula (1.7) is of a different nature, with weaker assumptions and an elementary proof.

Let us start with [17]. The assumption there is that $A_k, B_k$, $k \in \mathbb{Z}$, are positive and $A_k$ is independent of $B_k$. Then

$C = C_*\, C_{ESZ},$

where $C_*$ is the so-called Cramér–Lundberg constant¹ and

$C_{ESZ} = \widetilde{\mathbb{E}}\Big[\sum_{k<0} A_0\cdots A_{k+1}B_k + B_0 + \sum_{k>0} A_1^{-1}\cdots A_k^{-1}B_k\Big]^\alpha.$

For $k < 0$ the expectation is taken under $\widetilde{\mathbb{P}}(\,\cdot \mid A_0\cdots A_{k+1} \le 1\ \forall k < 0)$, $\widetilde{\mathbb{P}}$ being the product measure $\mu^{\mathbb{N}}$, while for $k \ge 0$ the expectation is taken under $\mathbb{P}_\alpha(\,\cdot \mid A_1\cdots A_k > 1\ \forall k > 0)$, $\mathbb{P}_\alpha$ being the product measure $\widetilde{\mu}_\alpha^{\mathbb{N}}$, where $\widetilde{\mu}_\alpha(U) = \mathbb{E}[A^\alpha \mathbf{1}_U(A)]$.

Another expression for $C$ can be found in [12]. Under the assumption that $A_k, B_k \ge 0$ we have

(1.12) $C = \frac{1}{\alpha m_\alpha}\,\mathbb{E}_\alpha\big[Z^\alpha\, \mathbf{1}_{\{\tau = \infty\}}\big],$

where $\tau$ is a regeneration time for the Markov chain

$X_{n+1} = A_{n+1}X_n + Q_{n+1}, \qquad Z = \sum_{n=1}^{\infty} A_1^{-1}\cdots A_n^{-1}B_n + X_0,$

the expectation $\mathbb{E}_\alpha$ is taken with respect to the product measure $\widetilde{\mu}_\alpha^{\mathbb{N}}$, and $X_0$ is distributed according to a minorisation measure needed for the regeneration scheme to work. In fact, the authors deal with the Letac model

$X_n = A_n\max(D_n, X_{n-1}) + B_n,$

and they obtain for it a formula for $C = C_+$ in the spirit of (1.12). Formula (1.12) is a corollary of their main result specialized to $D_n = 0$ and $B_n \ge 0$, i.e. when the Letac model becomes the affine recursion. It has already been used to simulate the Kesten–Goldie constant [13].

2. The main theorem

2.1. Iterated Lipschitz maps. Let $(\mathbb{X}, d)$ be a complete separable metric space with Borel $\sigma$-field $\mathcal{B}(\mathbb{X})$ and unbounded metric $d$. A temporally homogeneous Markov chain $(X_n)_{n\ge 0}$ with state space $\mathbb{X}$ is called an iterated function system of i.i.d. Lipschitz maps (IFS)², if it satisfies a recursion of the form

(2.1) $X_n = \psi(\theta_n, X_{n-1})$, for $n \ge 1$,

where

• $X_0, \theta_1, \theta_2, \ldots$ are independent random elements on a common probability space $\Omega$;

• $\theta_1, \theta_2, \ldots$ are identically distributed, taking values in a measurable space $(\Theta, \mathcal{A})$;

• $\psi : (\Theta\times\mathbb{X}, \mathcal{A}\otimes\mathcal{B}(\mathbb{X})) \to (\mathbb{X}, \mathcal{B}(\mathbb{X}))$ is jointly measurable and Lipschitz continuous in the second argument, i.e.

$d(\psi(\theta, x), \psi(\theta, y)) \le C_\theta\, d(x, y)$, for all $x, y \in \mathbb{X}$, $\theta \in \Theta$, and a suitable $C_\theta \in \mathbb{R}^+$.

We then have

$X_n = \psi_n\circ\cdots\circ\psi_1(X_0) =: \psi_{n,1}(X_0)$, where $\psi_i(x) = \psi(\theta_i, x)$.

We will also write $\psi(\theta, x) = \psi(x)$ for short. Let $L(\psi)$, $L(\psi_{n,1})$ denote the Lipschitz constants of $\psi$, $\psi_{n,1}$, respectively.

The following theorem of Elton [16] gives sufficient conditions for the existence of a stationary distribution for the Markov chain $(X_n)_{n\ge 1}$.

¹ That is, $\mathbb{P}[M > t] \sim Ct^{-\alpha}$, where $M = \sup_n A_1\cdots A_n$. ² We will also use the abbreviation: Lipschitz iterated system.


Theorem 2.2. Suppose that $\mathbb{E}\log^+ L(\psi) < \infty$, $\mathbb{E}\log^+ d(x_0, \psi(\theta, x_0)) < \infty$ for some $x_0$, and

$\lim_{n\to\infty} \frac{1}{n}\log L(\psi_{n,1}) < 0$ a.s.³

Then

• $X_n$ converges in law to a random variable $S$ with law $\nu$;

• $\nu$ is the unique stationary distribution of $(X_n)_{n\ge 0}$;

• the equation $S = \psi(S)$ holds true in law.
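The contractivity condition of Theorem 2.2 is easy to check by simulation. As a hedged numerical illustration (the affine example and its parameters are our own): for the affine map $\psi(x) = Ax + B$ the Lipschitz constant is $|A|$, so $\frac{1}{n}\log L(\psi_{n,1}) = \frac{1}{n}\sum_{i\le n}\log|A_i|$ converges almost surely to $\mathbb{E}\log|A|$ by the law of large numbers:

```python
import numpy as np

# Illustration (parameters our own): for psi(x) = A x + B the Lipschitz
# constant is |A|, so (1/n) log L(psi_{n,1}) = (1/n) sum_i log|A_i|.
# With log A ~ N(-0.25, 1) the a.s. limit is E log|A| = -0.25 < 0,
# so the hypotheses of Theorem 2.2 hold for this example.
rng = np.random.default_rng(3)
n = 1_000_000
logA = rng.normal(-0.25, 1.0, size=n)
lyap = np.cumsum(logA) / np.arange(1, n + 1)   # running averages
print(lyap[-1])   # close to -0.25
```

For a general IFS the same running-average diagnostic applies with $\log|A_i|$ replaced by $\log L(\psi_i)$, which overestimates the exponent but still certifies contraction when negative.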

In this context a natural question arises: under which conditions does $S$ exhibit heavy-tail behaviour? Lipschitz iterative systems have recently been considered by Alsmeyer [1] and some sufficient conditions are provided there. In considerable generality they allow one to obtain

$\lim_{t\to\infty} \frac{\log \mathbb{P}[d(x_0, X) > t]}{\log t} = -\alpha.$

Under further specific hypotheses more was proved in [1] (see the example with AR(1) in Section 3). However, Alsmeyer's conditions are too weak to imply Theorem 2.5 below. Therefore we work here within a more restrictive setting than [1]. Our standing assumption is:

$\mathbb{X} = \mathbb{R}$ and there is a random variable $(A, B) \in \mathbb{R}^2$ satisfying the Kesten–Goldie conditions such that

(2.3) $Ax - B \le \psi(x) \le Ax + B, \quad x \in \operatorname{supp}\nu.$

Condition (2.3) has a very natural geometric interpretation: it means that the graph of $\psi$ lies between the graphs of $Ax - B$ and $Ax + B$ for every $x \in \operatorname{supp}\nu$. This allows us to think of the recursion as being close to the affine recursion.

To get an idea of the meaning of (2.3), the reader may think of the recursion $\psi(\theta, x) = \max\{Ax, B\}$, where $\theta = (A, B) \in \mathbb{R}^+\times\mathbb{R} = \Theta$ (see Section 3). Notice that if $X_0 = x \ge 0$, then all the iterations stay positive, which implies that $\operatorname{supp}\nu \subset [0, \infty)$. We then have

$0 \le \max(Ax, B) - Ax \le B^+, \quad x \ge 0.$

Notice that for the max recursion (2.3) is not satisfied on $\mathbb{R}$, but only on $[0, \infty) \supseteq \operatorname{supp}\nu$. Assumption (2.3) has an important consequence: it yields formulae for the constants $C$, $C_+$, $C_-$ analogous to those of Goldie. This was first observed by Mirek, who proved the following theorem.

Theorem 2.4 ([27]). Assume that $\psi$ satisfies (2.3) together with the assumptions of Theorems 1.5 and 2.2. Then

$\lim_{t\to\infty} t^\alpha\,\mathbb{P}[|S| > t] = \frac{1}{\alpha m_\alpha}\,\mathbb{E}\big[|\psi(S)|^\alpha - |AS|^\alpha\big] = C.$

Moreover, if $A \ge 0$ a.s., then

$\lim_{t\to\infty} t^\alpha\,\mathbb{P}[S > t] = \frac{1}{\alpha m_\alpha}\,\mathbb{E}\big[((\psi(S))^+)^\alpha - ((AS)^+)^\alpha\big] = C_+,$

$\lim_{t\to\infty} t^\alpha\,\mathbb{P}[S < -t] = \frac{1}{\alpha m_\alpha}\,\mathbb{E}\big[((\psi(S))^-)^\alpha - ((AS)^-)^\alpha\big] = C_-.$

If $\mathbb{P}[A < 0] > 0$, then

$\lim_{t\to\infty} t^\alpha\,\mathbb{P}[S > t] = \lim_{t\to\infty} t^\alpha\,\mathbb{P}[S < -t] = \frac{1}{2\alpha m_\alpha}\,\mathbb{E}\big[|\psi(S)|^\alpha - |AS|^\alpha\big] = C_+ = C_-.$

Remark 2.1. Theorem 2.4 is formulated in [27] under the assumption that $\mathbb{P}[M = 0] = 0$, but the proof does not make use of that.
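Returning to the max recursion $\psi(x) = \max(Ax, B)$ discussed above, the sandwich (2.3) can be observed numerically: since $x \mapsto Ax + B^+$ is monotone for $A \ge 0$, the chain started at $0$ is dominated pathwise by the affine chain driven by $(A, B^+)$. The parameters below are our own illustrative choice, not the paper's:

```python
import numpy as np

# Sketch (our own illustration): for psi(x) = max(A x, B) with A >= 0,
# condition (2.3) holds on [0, inf) with bound B^+, and by monotonicity
# the sandwich propagates to the iterates started at X_0 = U_0 = 0.
rng = np.random.default_rng(5)
n_steps, n_paths = 300, 50_000
X = np.zeros(n_paths)           # max recursion
U = np.zeros(n_paths)           # upper affine chain U_n = A U_{n-1} + B^+
for _ in range(n_steps):
    A = rng.lognormal(-0.25, 1.0, size=n_paths)
    B = rng.normal(size=n_paths)
    X = np.maximum(A * X, B)
    U = A * U + np.maximum(B, 0.0)
print(bool(np.all(X <= U)))   # -> True: pathwise domination
```

The inequality `X <= U` is guaranteed by induction, so the check prints `True` for any parameter choice with $A \ge 0$.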

³ The convergence follows from the subadditive ergodic theorem.

2.2. Main result. The main theorem of this paper is the following.

Theorem 2.5. Suppose that $\mathbb{E}\log L(\psi) < 0$, that (2.3) and the assumptions of Theorem 1.5 are satisfied, and that $\mathbb{E}|X_0|^\alpha < \infty$. Then

$\lim_{n\to\infty} \frac{1}{\alpha m_\alpha n}\,\mathbb{E}|X_n|^\alpha = C.$

If $A \ge 0$, then

$\lim_{n\to\infty} \frac{1}{\alpha m_\alpha n}\,\mathbb{E}\big(((X_n)^+)^\alpha\big) = C_+, \qquad \lim_{n\to\infty} \frac{1}{\alpha m_\alpha n}\,\mathbb{E}\big(((X_n)^-)^\alpha\big) = C_-.$

If $\mathbb{P}[A < 0] > 0$, then

$\lim_{n\to\infty} \frac{1}{2\alpha m_\alpha n}\,\mathbb{E}|X_n|^\alpha = C_+ = C_-.$

Proof. Let us consider the backward process

$R_n^x = \psi_1\circ\cdots\circ\psi_n(x), \quad x \in \operatorname{supp}\nu.$

Notice that (2.3) implies that for every $x \in \operatorname{supp}\nu$

(2.6) $|R_n^x| \le \sum_{k=1}^{n} |A_1|\cdots|A_{k-1}||B_k| + |A_1|\cdots|A_n||x|.$

To prove (2.6) we proceed by induction. For $n = 1$, (2.6) follows directly from (2.3). If $x \in \operatorname{supp}\nu$, then $\psi_{n+1}(x) \in \operatorname{supp}\nu$, and so by induction

$|R_{n+1}^x| \le \sum_{k=1}^{n} |A_1|\cdots|A_{k-1}||B_k| + |A_1|\cdots|A_n||\psi_{n+1}(x)| \le \sum_{k=1}^{n} |A_1|\cdots|A_{k-1}||B_k| + |A_1|\cdots|A_n|\big(|A_{n+1}||x| + |B_{n+1}|\big) = \sum_{k=1}^{n+1} |A_1|\cdots|A_{k-1}||B_k| + |A_1|\cdots|A_{n+1}||x|,$

which proves the claim. Let

$\widetilde{R} = \sum_{k=1}^{\infty} |A_1|\cdots|A_{k-1}||B_k|.$

Since $\mathbb{E}\log|A| < 0$, we have

$\mathbb{E}|R_n^x|^\beta \le \mathbb{E}\widetilde{R}^\beta < \infty$ for every $n$ and $\beta < \alpha$.

Then $R_n^x \stackrel{d}{=} X_n^x$ for $x \in \operatorname{supp}\nu$, and $R_n^x$ converges a.s. to $S$ [14]. Let $b_n = \mathbb{E}|R_n^x|^\alpha$. Notice that

$b_{n+1} = \mathbb{E}|\psi_1(R_n^x\circ\delta)|^\alpha,$

where $\delta$ is the shift operator $\delta(\omega_1, \omega_2, \ldots) = (\omega_2, \ldots)$. If $A$ is independent of $R_n^x$, then, since $\mathbb{E}|A|^\alpha = 1$, we have

$b_n = \mathbb{E}|AR_n^x|^\alpha.$

By an elementary calculus lemma, it is enough to prove that

$\lim_{n\to\infty}(b_{n+1} - b_n) = \mathbb{E}\big[|\psi(S)|^\alpha - |AS|^\alpha\big].$


We have

$b_{n+1} - b_n = \mathbb{E}\big[|\psi_1(R_n^x\circ\delta)|^\alpha - |A_1(R_n^x\circ\delta)|^\alpha\big] \to \mathbb{E}\big[|\psi_1(S)|^\alpha - |A_1S|^\alpha\big],$

provided we can dominate

$\big||\psi_1(R_n^x\circ\delta)|^\alpha - |A_1(R_n^x\circ\delta)|^\alpha\big|$

by an integrable function. For $\alpha \le 1$ we write

$\big||\psi_1(R_n^x\circ\delta)|^\alpha - |A_1(R_n^x\circ\delta)|^\alpha\big| \le |\psi_1(R_n^x\circ\delta) - A_1(R_n^x\circ\delta)|^\alpha \le |B_1|^\alpha,$

which is integrable. Notice that if $x \in \operatorname{supp}\nu$, then $R_n^x\circ\delta \in \operatorname{supp}\nu$, and so we can use (2.3). If $\alpha > 1$ we use the inequality $|a^\alpha - b^\alpha| \le \alpha\max(a^{\alpha-1}, b^{\alpha-1})|a - b|$ and we have

$\big||\psi_1(R_n^x\circ\delta)|^\alpha - |A_1(R_n^x\circ\delta)|^\alpha\big| \le \alpha\max\big(|\psi_1(R_n^x\circ\delta)|^{\alpha-1}, |A_1(R_n^x\circ\delta)|^{\alpha-1}\big)\cdot|\psi_1(R_n^x\circ\delta) - A_1(R_n^x\circ\delta)| \le C\big(|A_1|^{\alpha-1}|R_n^x\circ\delta|^{\alpha-1} + |B_1|^{\alpha-1}\big)|B_1| \le C\big(|A_1|^{\alpha-1}(\widetilde{R}\circ\delta)^{\alpha-1} + |B_1|^{\alpha-1}\big)|B_1|.$

The latter variable is integrable:

$\mathbb{E}\big[|A_1|^{\alpha-1}|B_1|(\widetilde{R}\circ\delta)^{\alpha-1} + |B_1|^\alpha\big] = \mathbb{E}\big[|A_1|^{\alpha-1}|B_1|\big]\,\mathbb{E}\big[(\widetilde{R}\circ\delta)^{\alpha-1}\big] + \mathbb{E}|B_1|^\alpha,$

which is finite, because

$\mathbb{E}\big(|A_1|^{\alpha-1}|B_1|\big) \le \big(\mathbb{E}|A_1|^{(\alpha-1)p}\big)^{1/p}\big(\mathbb{E}|B_1|^q\big)^{1/q}$

with $p = \frac{\alpha}{\alpha-1}$, $q = \alpha$. To conclude, we define $b_n = \mathbb{E}\big(((R_n^x)^+)^\alpha\big) = \mathbb{E}\big[A^\alpha((R_n^x)^+)^\alpha\big]$ and we repeat the above argument, making use of the simple inequality $|\max(a, 0) - \max(b, 0)| \le |a - b|$. □

3. Positivity of the constants and examples

In this section we describe a few examples to which Theorems 2.4 and 2.5 can be applied. Neither of them guarantees strict positivity of $C$ or $C_+$, $C_-$. This requires some further arguments; see [7] for recent results.

3.1. The Letac model. For $A \ge 0$, consider

$\psi(x) = A\max(D, x) + B$

with $A \in \mathbb{R}^+$, $B, D \in \mathbb{R}$. This model was already considered by Goldie [19]. Notice that the extremal recursion $\psi(x) = \max(Ax, B)$ is a particular case of it.

This is a Lipschitz recursion with Lipschitz constant $A$. Assume that the Kesten–Goldie conditions are satisfied and additionally $\mathbb{E}\big[A^\alpha|D|^\alpha\big] < \infty$. Then Mirek's scheme can be applied here if $\operatorname{supp}\nu \subset [-c_0, \infty)$, $c_0 \ge 0$. Indeed, for $x \ge -c_0$

$\psi(x) - Ax = A\,\frac{D - x + |D - x|}{2} + B = A(D - x)^+ + B,$

and a simple calculation shows that

(3.1) $-AD^- + B \le \psi(x) - Ax \le AD^+ + Ac_0 + B.$

Therefore, by Theorems 2.4 and 2.5 we may conclude that

$\lim_{t\to\infty} t^\alpha\,\mathbb{P}[S > t] = C_+ = \lim_{n\to\infty} \frac{1}{\alpha m_\alpha n}\,\mathbb{E}\big[((X_n^x)^+)^\alpha\big]$

and

$\lim_{n\to\infty} \frac{1}{\alpha m_\alpha n}\,\mathbb{E}\big[((X_n^x)^-)^\alpha\big] = 0.$
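The two limits above can be watched numerically. The sketch below is entirely our own illustration (all parameters and distributional choices are assumptions, not the paper's); we take $B, D \ge 0$ so that the chain stays nonnegative, $c_0 = 0$, and the $C_-$ estimator vanishes identically:

```python
import numpy as np

# Numerical sketch for the Letac recursion X_n = A_n max(D_n, X_{n-1}) + B_n
# (illustrative parameters, our choice): lognormal A with E A^alpha = 1 at
# alpha = 0.5, m_alpha = E[A^alpha log A] = 0.25, and B, D >= 0 so that the
# chain stays nonnegative. The C_+ estimator is rough and noisy.
rng = np.random.default_rng(11)
mu, sigma, alpha = -0.25, 1.0, 0.5
m_alpha = mu + alpha * sigma**2
n_steps, n_paths = 100, 100_000
X = np.zeros(n_paths)
for _ in range(n_steps):
    A = rng.lognormal(mu, sigma, size=n_paths)
    B = np.abs(rng.normal(size=n_paths))
    D = np.abs(rng.normal(size=n_paths))
    X = A * np.maximum(D, X) + B
c_plus = np.mean(np.maximum(X, 0.0) ** alpha) / (alpha * m_alpha * n_steps)
c_minus = np.mean(np.maximum(-X, 0.0) ** alpha) / (alpha * m_alpha * n_steps)
print(c_plus, c_minus)   # c_minus is exactly 0, since the chain is >= 0
```

With signed $B$ or $D$ the negative-part estimator would be positive for finite $n$ but would still tend to $0$, consistently with the display above.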


(3.1) can be used to get positivity of $C_+$ provided the same holds for the recursion with $\psi(x) = Ax - AD^- + B$. A necessary and sufficient condition for positivity of $C_+$ is given in [7]. See also [12] and [19] for sufficient conditions for positivity of $C_+$.

3.2. Lipschitz recursions modeled on the Letac model. For $A \ge 0$, consider a Lipschitz transformation $\psi$ that satisfies

$A\max(D_1, x) + B_1 \le \psi(x) \le A\max(D_2, x) + B_2$

with $A \in \mathbb{R}^+$, $B_1, B_2, D_1, D_2 \in \mathbb{R}$, and suppose that

$\mathbb{E}\log \operatorname{Lip}\psi < 0, \qquad \mathbb{E}\log^+\big(A(1 + |D_i|) + |B_i|\big) < \infty$ for $i = 1, 2$.

Then the assumptions of Theorem 2.2 are satisfied and

$X_{n+1}^x = \psi(\theta_{n+1}, X_n^x)$

has a unique stationary solution that does not depend on the starting point. Assume additionally that $\operatorname{supp}\nu \subset [-c_0, \infty)$, $c_0 \ge 0$. Proceeding as before, we obtain that for $x > 0$

$-AD_1^- + B_1 \le \psi(x) - Ax \le AD_2^+ + B_2 + Ac_0.$

Suppose now the Kesten–Goldie conditions and $\mathbb{E}\big[A^\alpha(D_1^- + D_2^+)^\alpha + (B_1^- + B_2^+)^\alpha\big] < \infty$. Then the assumptions of Theorem 2.5 are satisfied and we may conclude that

$\lim_{t\to\infty} t^\alpha\,\mathbb{P}[|S| > t] = \lim_{t\to\infty} t^\alpha\,\mathbb{P}[S > t] = C_+ = \lim_{n\to\infty} \frac{1}{\alpha m_\alpha n}\,\mathbb{E}\big[((X_n^x)^+)^\alpha\big]$

and

$\lim_{n\to\infty} \frac{1}{\alpha m_\alpha n}\,\mathbb{E}\big[((X_n^x)^-)^\alpha\big] = 0.$

In view of [7], positivity of $C_+$ is equivalent to unboundedness of the support of the stationary solution. Again, see [12] for some sufficient conditions for positivity of $C_+$.

3.3. The AR(1) model with ARCH(1) errors. This is a nonlinear model introduced by Engle and Weiss. It has received attention due to its relevance in mathematical finance, where it is known as a relatively simple model that captures the temporal variation of volatility in financial data sets. It is defined by the recursion

$X_n = \alpha X_{n-1} + (\beta + \lambda X_{n-1}^2)^{1/2}\varepsilon_n, \quad n \ge 1,$

with $(\alpha, \beta, \lambda) \in \mathbb{R}\times\mathbb{R}^+\times\mathbb{R}^+$, where the $\varepsilon_n$, called innovations, are assumed to be i.i.d., symmetric and independent of the initial distribution $X_0$. (See [1] for a nice description.)

If $X_0$ is symmetric, then for every $n$, $X_n$ is symmetric, and so is the stationary distribution $S$, provided the assumptions of Theorem 2.2 are satisfied and $S$ exists. This is the case if $\mathbb{E}\log(|\alpha| + \lambda^{1/2}|\varepsilon|) < 0$. Then

(3.2) $S = \alpha S + (\beta + \lambda S^2)^{1/2}\varepsilon$ in law.

We may assume that $\alpha > 0$, because $S$ satisfies (3.2) with $-\alpha$ too. $|S|$ is independent of $\operatorname{sgn}S$, so putting $\eta = \varepsilon\operatorname{sgn}S$ we get a variable independent of $|S|$, and so we may consider $W = |S|^2$, which satisfies the equation

$W = \psi(W) = (\alpha + \lambda^{1/2}\eta)^2 W + \frac{2\alpha\beta\eta W^{1/2}}{(\beta + \lambda W)^{1/2} + (\lambda W)^{1/2}} + \beta\eta^2$ in law.

Using this equation Alsmeyer proved that

(3.3) $\lim_{t\to\infty} t^{2\kappa}\,\mathbb{P}(S > t) = \frac{1}{2\kappa m_\kappa}\,\mathbb{E}\big(\psi(W)^\kappa - (AW)^\kappa\big) > 0,$

where $m_\kappa = \mathbb{E}A^\kappa\log A$, $A = (\alpha + \lambda^{1/2}\eta)^2$, with $\kappa$ playing the role of the Cramér exponent.

Notice that

$0 \le \psi(W) \le (\alpha + \lambda^{1/2}\eta)^2 W + \beta\big(2\alpha\lambda^{-1/2}|\eta| + \eta^2\big).$

Therefore, if

$W_n = \psi_n(W_{n-1}), \quad W_0 = 0,$

then by Theorem 2.5

$\lim_{t\to\infty} t^{2\kappa}\,\mathbb{P}[S > t] = \frac12\lim_{t\to\infty} t^\kappa\,\mathbb{P}[W > t] = \lim_{n\to\infty} \frac{1}{2\kappa m_\kappa n}\,\mathbb{E}W_n^\kappa.$
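The AR(1)–ARCH(1) recursion is easy to simulate directly. The sketch below is our own illustration (the parameters $\alpha = 0.3$, $\beta = 1$, $\lambda = 0.5$ and Gaussian innovations are assumptions); it checks the contractivity condition $\mathbb{E}\log(|\alpha| + \lambda^{1/2}|\varepsilon|) < 0$ and the symmetry of the simulated law:

```python
import numpy as np

# Illustrative simulation of X_n = a X_{n-1} + sqrt(beta + lam X_{n-1}^2) eps_n
# with symmetric Gaussian innovations (parameters our own choice).
rng = np.random.default_rng(17)
a, beta, lam = 0.3, 1.0, 0.5
n_steps, n_paths = 500, 50_000
X = np.zeros(n_paths)
for _ in range(n_steps):
    eps = rng.normal(size=n_paths)
    X = a * X + np.sqrt(beta + lam * X**2) * eps
# contractivity: E log(|a| + sqrt(lam)|eps|) < 0 guarantees that S exists;
# symmetry: the empirical mean of sign(X_n) should be near 0
drift = float(np.mean(np.log(np.abs(a) + np.sqrt(lam) * np.abs(rng.normal(size=10**6)))))
print(drift < 0, abs(float(np.mean(np.sign(X)))))
```

A log-log plot of the empirical tail of the simulated $X_n$ would then exhibit the power decay with exponent $2\kappa$ described by (3.3).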

4. Similarities

In this section we consider a multidimensional version of (1.1), i.e.

$X_n^x = \psi_n(X_{n-1}^x) = A_nX_{n-1}^x + B_n \in \mathbb{R}^d,$

assuming that the matrices $A_n$ are similarities, $A_n \in \mathbb{R}^+\times O(\mathbb{R}^d)$, $B_n \in \mathbb{R}^d$ and $X_0^x = x \in \mathbb{R}^d$. An element $g \in GL(\mathbb{R}^d)$ is a similarity in the sense of Euclidean geometry if

$|gx| = |g||x|, \quad x \in \mathbb{R}^d,$

where $|g|$ denotes the norm of the linear transformation $g$ of $\mathbb{R}^d$. If $g$ is a similarity, then $|g|^{-1}g$ preserves the norm on $\mathbb{R}^d$. Hence the subgroup $G \subseteq GL(\mathbb{R}^d)$ of all similarities is isomorphic to the direct product of the multiplicative group of positive real numbers $\mathbb{R}^+$ and the orthogonal group $O(\mathbb{R}^d)$. We will write $G = \mathbb{R}^+\times O(\mathbb{R}^d)$.

Let $H = \mathbb{R}^d \rtimes G$ be the group of transformations

$\mathbb{R}^d \ni x \mapsto hx = gx + q \in \mathbb{R}^d,$

where $h = (q, g)$ with $g \in G$, $q \in \mathbb{R}^d$. Then $(B_n, A_n)$ is an $H$-valued i.i.d. sequence with distribution $\mu$. (Here $B_n \in \mathbb{R}^d$ and $A_n \in G$.)

If $\mathbb{E}\log|A| < 0$ and $\mathbb{E}\log^+|B| < \infty$, then the assumptions of Theorem 2.2 are satisfied and the sequence $X_n$ converges in law to a random variable $S$ given by (1.3), which is the unique solution of the random difference equation (1.2). The main result of [8] shows that under appropriate assumptions the random variable $S$ is regularly varying. The law of $S$ will be denoted by $\nu$.

We are going to consider a slightly more general situation than in [8]: namely, we allow $A$ to take values in $G\cup\{0\}$. Then $(B, A) \in H\cup\{0\}$, and the latter will be our standing assumption. This setting was not considered in [8], but the basic result of [8] describing the tail of $S$ (Theorem 4.1 below) holds true.

Let $\bar\mu$ be the law of $A$, let $G_{\bar\mu}$ be the closed group generated by the support of $\bar\mu$ restricted to $G$ (we avoid a possible nonzero mass of $\bar\mu$ at $A = 0$), and let $K_{\bar\mu} = G_{\bar\mu}\cap O(\mathbb{R}^d)$. Then $G_{\bar\mu} = \mathbb{R}^+\times K_{\bar\mu}$. The only place in [8] where the group structure interferes is a renewal theorem on $G_{\bar\mu}$ applied to the probability measure $|A|^\alpha\bar\mu$, and a possible positive mass of $\bar\mu$ at zero does not play any role.

Set

$\nu_g(f) = |g|^{-\alpha}(g\nu)(f) = |g|^{-\alpha}\int_{\mathbb{R}^d} f(gx)\,\nu(dx) = |g|^{-\alpha}\,\mathbb{E}f(gS),$

where $S$ is the solution to (1.2). For $x \in \mathbb{R}^d$, $x \ne 0$, we denote by $\bar x$ the projection of $x$ onto the unit sphere $\mathbb{S}^{d-1}$, i.e. $\bar x = x/|x|$.

The following result, from [8], describes the tail of $\nu$.

Theorem 4.1. Assume that the action of $\operatorname{supp}\mu$ on $\mathbb{R}^d$ has no fixed point and

• $\mathbb{E}[\log|A|] < 0$;

• there is $\alpha > 0$ such that $\mathbb{E}|A|^\alpha = 1$;

• $\mathbb{E}[|A|^\alpha\log^+|A|]$ and $\mathbb{E}|B|^\alpha$ are both finite;

• the law of $|A|$, conditioned on $\mathbb{R}^+$, is non-arithmetic.

Then there is a Radon measure $\Lambda$ on $\mathbb{R}^d\setminus\{0\}$ such that for every bounded continuous function $F$ that vanishes in a neighborhood of zero

(4.2) $\lim_{|g|\to 0,\ g\in G_{\bar\mu}} \langle F, \nu_g\rangle = \langle F, \Lambda\rangle = \frac{1}{\alpha m_\alpha}\int_{G_{\bar\mu}} |g|^{-\alpha}\,\mathbb{E}\big[F(gS) - F(gAS)\big]\,d\lambda(g),$

where $\lambda = \frac{dr}{r}\times dk$ is the Haar measure on $G_{\bar\mu}$ normalized so that $\int_{K_{\bar\mu}} dk = 1$. Moreover, there is a finite $K_{\bar\mu}$-invariant measure $\sigma_\mu$ on $\mathbb{S}^{d-1}$ such that, in radial coordinates, $\Lambda = \sigma_\mu\otimes\frac{\alpha\,dr}{r^{\alpha+1}}$, i.e.

(4.3) $\langle F, \Lambda\rangle = \int_{\mathbb{R}^+\times\mathbb{S}^{d-1}} F(r\omega)\,\frac{\alpha}{r^{\alpha+1}}\,dr\,\sigma_\mu(d\omega).$

Finally, (4.2) holds for every function $F$ such that $0 \notin \operatorname{supp}F$, the $\Lambda$-measure of the set of discontinuities of $F$ is $0$, and for some $\varepsilon > 0$

$\sup_{x\ne 0}\big(|x|^{-\alpha}|\log|x||^{1+\varepsilon}|F(x)|\big) < \infty.$

In the multidimensional case the measure $\sigma_\mu$ is a straightforward analogue of the limiting constants $C_-$ and $C_+$. Indeed, when $d = 1$, $\sigma_\mu = C_+\delta_1 + C_-\delta_{-1}$. It is then natural to describe $\sigma_\mu$ more carefully, and not only its total mass.

Our main result in the multidimensional case is the following.

Theorem 4.4. Suppose that the hypotheses of Theorem 4.1 are satisfied. Then for any continuous, $K_{\bar\mu}$-invariant function $f$ on $\mathbb{S}^{d-1}$ we have

(4.5) $\lim_{n\to\infty} \frac{1}{\alpha m_\alpha n}\,\mathbb{E}\big[|X_n^x|^\alpha f(\overline{X_n^x})\big] = \langle f, \sigma_\mu\rangle = \int_{\mathbb{S}^{d-1}} f(\omega)\,\sigma_\mu(d\omega).$

Proof. Let $F$ be Hölder continuous with exponent $\xi < \min(\alpha, 1)$ and compactly supported in $\mathbb{R}^d\setminus\{0\}$. We fix $\varepsilon > 0$ and consider the family of functions

$\chi_{s,F}(g) = |g|^{-s}\,\mathbb{E}\big[F(gS) - F(gAS)\big]$

for $\alpha - \varepsilon \le s \le \alpha$. The functions $\chi_{s,F}$ are all directly integrable (see [8]) and

$|\chi_{s,F}| \le |\chi_{\alpha-\varepsilon,F}| + |\chi_{\alpha,F}|.$

Therefore,

$\lim_{s\to\alpha}\int_{G_{\bar\mu}} \chi_{s,F}(g)\,d\lambda(g) = \int_{G_{\bar\mu}} |g|^{-\alpha}\big(\mathbb{E}(F(gS) - F(gAS))\big)\,d\lambda(g) = \alpha m_\alpha\,\langle F, \Lambda\rangle.$

Let now $F(r\omega) = \phi(r)f(\omega)$, with $f(k\omega) = f(\omega)$, $\phi, f$ Hölder continuous and $\operatorname{supp}\phi \subset \mathbb{R}^+$. Then

(4.6) $\int_{G_{\bar\mu}} |g|^{-s}\big(\mathbb{E}(F(gS) - F(gAS))\big)\,d\lambda(g) = \int_{\mathbb{R}^+} r^{-s}\big(\mathbb{E}(F(rS) - F(r|A|S))\big)\,\frac{dr}{r},$

and for $s < \alpha$ (with $c > 0$ such that $\operatorname{supp}\phi \subset [c, \infty)$)

$\int_{\mathbb{R}^+} r^{-s}\,\mathbb{E}F(rS)\,\frac{dr}{r} = \int_{\mathbb{R}^+} r^{-s}\,\mathbb{E}\big[\phi(r|S|)f(\bar S)\big]\,\frac{dr}{r} = \mathbb{E}\Big[f(\bar S)\int_{\mathbb{R}^+} r^{-s}\phi(r|S|)\,\frac{dr}{r}\Big] = \mathbb{E}\Big[f(\bar S)\int_{c|S|^{-1}}^{\infty} r^{-s}\phi(r|S|)\,\frac{dr}{r}\Big] = \mathbb{E}\Big[f(\bar S)\int_{c}^{\infty} r^{-s}|S|^{s}\phi(r)\,\frac{dr}{r}\Big] = \mathbb{E}\big[|S|^s f(\bar S)\big]\int_{\mathbb{R}^+} r^{-s}\phi(r)\,\frac{dr}{r} = \mathbb{E}\big[|\psi(S)|^s f(\overline{\psi(S)})\big]\int_{\mathbb{R}^+} r^{-s}\phi(r)\,\frac{dr}{r},$

where $\psi(S) = AS + B$ with $(A, B)$ independent of $S$. Analogously we proceed with the second term. Therefore,

$\int_{\mathbb{R}^+} \chi_{s,F}(r)\,\frac{dr}{r} = \int_{\mathbb{R}^+} r^{-s}\phi(r)\,\frac{dr}{r}\cdot\mathbb{E}\big[|\psi(S)|^s f(\overline{\psi(S)}) - |AS|^s f(\overline{AS})\big],$

and so

(4.7) $\int_{\mathbb{R}^+} \chi_{\alpha,F}(r)\,\frac{dr}{r} = \lim_{s\to\alpha}\int_{\mathbb{R}^+} \chi_{s,F}(r)\,\frac{dr}{r} = \int_{\mathbb{R}^+} r^{-\alpha}\phi(r)\,\frac{dr}{r}\,\mathbb{E}\big[|\psi(S)|^\alpha f(\overline{\psi(S)}) - |AS|^\alpha f(\overline{AS})\big],$

provided we can dominate

(4.8) $|\psi(S)|^s f(\overline{\psi(S)}) - |AS|^s f(\overline{AS}) = \big(|\psi(S)|^s - |AS|^s\big)f(\overline{\psi(S)}) + |AS|^s\big(f(\overline{\psi(S)}) - f(\overline{AS})\big)$

by an integrable function independent of $s$. As in the proof of Theorem 2.5 we have

$\big||\psi(S)|^s - |AS|^s\big|\,\big|f(\overline{\psi(S)})\big| \le C|B|^s$ if $s \le 1$, and $\big||\psi(S)|^s - |AS|^s\big|\,\big|f(\overline{\psi(S)})\big| \le C\big(|B|^s + |A|^{s-1}|B||S|^{s-1}\big)$ if $s > 1$.

Therefore, the first term in (4.8) is bounded uniformly by $C(1 + |B|^\alpha)$ or by $C\big(1 + |B|^\alpha + |A|^{\alpha-1}|B||S|^{\alpha-1}\big)$, which is integrable. For the second term in (4.8) we have

$|A|^\alpha|S|^\alpha\big|f(\overline{\psi(S)}) - f(\overline{AS})\big| \le C|A|^\alpha|S|^\alpha\Big|\frac{\psi(S)}{|\psi(S)|} - \frac{AS}{|AS|}\Big|^\xi = C|A|^{\alpha-\xi}|S|^{\alpha-\xi}|\psi(S)|^{-\xi}\big|\psi(S)|AS| - |\psi(S)|AS\big|^\xi.$

Moreover,

$\big|\psi(S)|AS| - |\psi(S)|AS\big|^\xi \le \big|\psi(S)|AS| - \psi(S)|\psi(S)|\big|^\xi + \big|\psi(S)|\psi(S)| - |\psi(S)|AS\big|^\xi \le 2|\psi(S)|^\xi\big|\psi(S) - AS\big|^\xi \le 2|\psi(S)|^\xi|B|^\xi.$

Hence

(4.9) $|A|^\alpha|S|^\alpha\big|f(\overline{\psi(S)}) - f(\overline{AS})\big| \le 2C|A|^{\alpha-\xi}|S|^{\alpha-\xi}|B|^\xi < \infty.$

Since $\nu$ has no atoms (see e.g. [9]), $\mathbb{P}[S = 0 \text{ or } \psi(S) = 0] = 0$, and so the above argument holds on a set of full measure. Finally, from (4.3) and (4.7) we obtain that for $\xi$-Hölder functions $f$

$\langle f, \sigma_\mu\rangle = \frac{1}{\alpha m_\alpha}\,\mathbb{E}\big(|\psi(S)|^\alpha f(\overline{\psi(S)}) - |AS|^\alpha f(\overline{AS})\big).$

Let

$b_n = \mathbb{E}\big[|X_n^x|^\alpha f(\overline{X_n^x})\big] = \mathbb{E}\big[|R_n^x|^\alpha f(\overline{R_n^x})\big],$

where $R_n^x = \sum_{k=1}^{n} A_1\cdots A_{k-1}B_k + A_1\cdots A_n x$, with the convention that if $R_n^x = 0$ then the function under the expectation is $0$. Then

$b_n = \mathbb{E}\big[|A_1(R_n^x\circ\delta)|^\alpha f(\overline{A_1(R_n^x\circ\delta)})\big],$

because $\mathbb{E}|A_1|^\alpha = 1$ and, by the $K_{\bar\mu}$-invariance of $f$, $f(\overline{A_1(R_n^x\circ\delta)}) = f(\overline{R_n^x\circ\delta})$. Therefore,

(4.10) $b_{n+1} - b_n = \mathbb{E}\big[|R_{n+1}^x|^\alpha f(\overline{R_{n+1}^x}) - |A_1(R_n^x\circ\delta)|^\alpha f(\overline{A_1(R_n^x\circ\delta)})\big]$

and

$b_{n+1} - b_n \to \mathbb{E}\big[|\psi(S)|^\alpha f(\overline{\psi(S)}) - |AS|^\alpha f(\overline{AS})\big] = \alpha m_\alpha\,\langle f, \sigma_\mu\rangle,$

provided we can dominate the integrand in (4.10) by an $L^1$ function. We proceed as in the proof of the previous proposition. We have

$|R_{n+1}^x|^\alpha f(\overline{R_{n+1}^x}) - |A_1(R_n^x\circ\delta)|^\alpha f(\overline{A_1(R_n^x\circ\delta)}) = \big(|R_{n+1}^x|^\alpha - |A_1(R_n^x\circ\delta)|^\alpha\big)f(\overline{R_{n+1}^x}) + |A_1(R_n^x\circ\delta)|^\alpha\big(f(\overline{R_{n+1}^x}) - f(\overline{A_1(R_n^x\circ\delta)})\big).$

But as in the proof of Theorem 2.5,

(4.11) $\big||R_{n+1}^x|^\alpha - |A_1(R_n^x\circ\delta)|^\alpha\big| \le |B_1|^\alpha$, if $\alpha \le 1$,

or

(4.12) $\big||R_{n+1}^x|^\alpha - |A_1(R_n^x\circ\delta)|^\alpha\big| \le \alpha\big(|B_1|^\alpha + |A_1|^{\alpha-1}|B_1|(\widetilde{R}^x)^{\alpha-1}\big)$, if $\alpha > 1$,

which is integrable. Proceeding as above, we have

(4.13) $|A_1(R_n^x\circ\delta)|^\alpha\big|f(\overline{R_{n+1}^x}) - f(\overline{A_1(R_n^x\circ\delta)})\big| \le C|A_1|^{\alpha-\xi}|B_1|^\xi(\widetilde{R}^x)^{\alpha-\xi}$

on the set $\{R_{n+1}^x \ne 0,\ A_1(R_n^x\circ\delta) \ne 0\}$. But since $|R_{n+1}^x - A_1(R_n^x\circ\delta)| = |B_1|$, if one of the terms $R_{n+1}^x$ or $A_1(R_n^x\circ\delta)$ is zero, then the other is equal to $B_1$.

Therefore, (4.5) holds for Hölder functions. It can easily be extended to continuous functions, because for any $f \in C(\mathbb{S}^{d-1})$ and $\eta > 0$ we can find $f_\eta$ that is $\xi$-Hölder and $\|f_\eta - f\|_\infty < \eta$. Then

(4.14) $\Big|\frac{1}{\alpha m_\alpha n}\,\mathbb{E}\big[|X_n^x|^\alpha f(\overline{X_n^x})\big] - \frac{1}{\alpha m_\alpha n}\,\mathbb{E}\big[|X_n^x|^\alpha f_\eta(\overline{X_n^x})\big]\Big| \le \eta\,\frac{1}{\alpha m_\alpha n}\,\mathbb{E}\big[|X_n^x|^\alpha\big] \le C\eta$

by (4.5) applied to the constant function $1$ on $\mathbb{S}^{d-1}$. Hence (4.5) follows for $f \in C(\mathbb{S}^{d-1})$. □

Theorem 4.4 can be proved in the more general setting of Lipschitz iterated systems modeled on similarities (see Mirek [27]); however, we are not going to give any further details in this paper.

References

[1] G. Alsmeyer. On the stationary tail index of iterated random Lipschitz functions. Preprint, arxiv.org/abs/1409.2663.

[2] G. Alsmeyer, S. Mentemeier. Tail behavior of stationary solutions of random difference equations: the case of regular matrices. Journal of Difference Equations and Applications, 18(8):1305–1332, 2012.

[3] L. Arnold and H. Crauel. Iterated function systems and multiplicative ergodic theory, in Diffusion Theory and Related Problems in Analysis II, M. Pinsky and V. Wihstutz, eds. Birkhäuser, Boston, 1992, pp. 283–305.

[4] M. Babillot, P. Bougerol, and L. Elie. The random difference equation $X_n = A_nX_{n-1} + B_n$ in the critical case. Ann. Probab., 25(1):478–493, 1997.

[5] K. Bartkiewicz, A. Jakubowski, T. Mikosch, O. Wintenberger. Stable limits for sums of dependent infinite variance random variables. Probab. Theory Relat. Fields, 150:337–372, 2011.

[6] S. Brofferio, D. Buraczewski. On unbounded invariant measures of stochastic dynamical systems. Ann. Probab., 43:1456–1492, 2015.

[7] D. Buraczewski, E. Damek. A simple proof for precise tail asymptotics of affine type Lipschitz recursions. Preprint.

[8] D. Buraczewski, E. Damek, Y. Guivarc'h, A. Hulanicki and R. Urban. Tail-homogeneity of stationary measures for some multidimensional stochastic recursions. Probab. Theory Related Fields, 145(3):385–420, 2009.

[9] D. Buraczewski, E. Damek, T. Mikosch. The Stochastic Equation $X \stackrel{d}{=} AX + B$, work in progress.

[10] D. Buraczewski, E. Damek and M. Mirek. Asymptotics of stationary solutions of multivariate stochastic recursions with heavy tailed inputs and related limit theorems. Stoch. Proc. Appl. 122, 42–67, 2012.

[11] B. Chauvin, Q. Liu, N. Pouyanne. Limit distributions for multitype branching processes of m-ary search trees. Ann. Inst. Henri Poincaré Probab. Stat., 50(2):628–654, 2014.


[12] J. F. Collamore and A. N. Vidyashankar. Tail estimates for stochastic fixed point equations via nonlinear renewal theory. Stochastic Process. Appl., 123(9):3378–3429, 2013.

[13] J. F. Collamore, G.Diao and A. N. Vidyashankar. Rare event simulation for processes generated via sto- chastic fixed point equations. The Annals of Applied Probability, 24(5), 2143-2175.

[14] P. Diaconis, D. Freedman. Iterated random functions. SIAM Rev., 41(1):45–76, 1999.

[15] M. Duflo. Random Iterative Systems. Springer Verlag, New York, 1997.

[16] J. H. Elton. A multiplicative ergodic theorem for Lipschitz maps, Stoch. Process. Appl., 34, 39–47, 1990.

[17] N. Enriquez, C. Sabot, and O. Zindy. A probabilistic representation of constants in Kesten’s renewal theorem.

Probab. Theory Related Fields, 144(3-4):581–613, 2009.

[18] H. Furstenberg, H. Kesten. Products of random matrices. Ann. Math. Statist., 31:457–469, 1960.

[19] C. M. Goldie. Implicit renewal theory and tails of solutions of random equations. Ann. Appl. Probab., 1(1):126–166, 1991.

[20] A.K. Grincevicius. On limit distribution for a random walk on the line. Lithuanian Math. J. 15 (1975), 580–589 (English translation).

[21] Y.Guivarc’h, E. Le Page. Spectral gap properties and asymptotics of stationary measures for affine random walks. Annales IHP, accepted, arXiv:1204.6004., 2012

[22] Y.Guivarc’h, E. Le Page. On the homogeneity at infinity of the stationary probability for an affine random walk. Contemporary Mathematics 631 (2015), 119-130

[23] H. Hennion, L. Hervé. Central limit theorems for iterated random Lipschitz mappings. Ann. Probab., 32(3A):1934–1984, 2004.

[24] H. Kesten. Random difference equations and renewal theory for products of random matrices. Acta Math., 131(1):207–248, 1973.

[25] C. Klüppelberg, S. Pergamenchtchikov. The tail of the stationary distribution of a random coefficient AR(q) model. Ann. Appl. Probab., 14(2):971–1005, 2004.

[26] M. Meiners, S. Mentemeier. Solutions to complex smoothing equations. arXiv 1507.08043v1.

[27] M. Mirek. Heavy tail phenomenon and convergence to stable laws for iterated Lipschitz maps. Probab. Theory Related Fields 151(3-4), 705–734, 2011.

[28] S. T. Rachev and G. Samorodnitsky. Limit laws for a stochastic process and random recursion arising in probabilistic modelling. Adv. in Appl. Probab., 27(1):185–202, 1995.

[29] W.Vervaat. On a stochastic difference equation and a representation of non-negative infinitely divisible random variables. Adv. Appl. Prob. 11 (1979), 750–783.

D. Buraczewski, E. Damek, and J. Zienkiewicz, Instytut Matematyczny, Uniwersytet Wroclawski, 50- 384 Wroclaw, pl. Grunwaldzki 2/4, Poland

E-mail address: dbura@math.uni.wroc.pl, edamek@math.uni.wroc.pl, zenek@math.uni.wroc.pl
