D. BURACZEWSKI, E. DAMEK, AND J. ZIENKIEWICZ

Abstract. We consider the stochastic difference equation on $\mathbb{R}$
$$X_n = A_n X_{n-1} + B_n, \qquad n \ge 1,$$
where $(A_n, B_n) \in \mathbb{R}\times\mathbb{R}$ is an i.i.d. sequence of random variables and $X_0$ is an initial distribution. Under mild contractivity hypotheses the sequence $X_n$ converges in law to a random variable $S$, which is the unique solution of the random difference equation $S =_d AS + B$. We prove that under the Kesten-Goldie conditions
$$\lim_{n\to\infty}\frac{\mathbb{E}|X_n|^{\alpha}}{n} = \alpha m_{\alpha} C_{\infty},$$
where $C_{\infty}$ is the Kesten-Goldie constant
$$C_{\infty} = \lim_{t\to\infty} t^{\alpha}\,\mathbb{P}[|S| > t],$$
$\alpha$ is the Cramér coefficient of $\log|A_1|$, $m_{\alpha} = \mathbb{E}[|A_1|^{\alpha}\log|A_1|]$ and $\mathbb{E}|X_0|^{\alpha} < \infty$. Thus, on one side we describe the behavior of the $\alpha$th moments of the process $\{X_n\}$, and on the other we obtain an alternative formula for $C_{\infty}$. The results are further extended to a class of Lipschitz iterated systems and to a multidimensional setting.

1. Introduction

**1.1. The random difference equation.** We consider the stochastic difference equation on $\mathbb{R}$

(1.1) $X_n = A_n X_{n-1} + B_n, \qquad n \ge 1,$

where $(A_n, B_n) \in \mathbb{R}\times\mathbb{R}$ is a sequence of i.i.d. (independent identically distributed) random variables and $X_0 \in \mathbb{R}$ is an initial distribution. The generic element of the sequence $(A_n, B_n)$ will be denoted by $(A, B)$. Under mild contractivity hypotheses the sequence $X_n$ converges in law to a random variable $S$, which is the unique solution of the random difference equation

(1.2) $S =_d AS + B, \qquad S \text{ independent of } (A, B);$

see [18, 29]. Moreover, the solution $S$ can be explicitly written as

(1.3) $S = \sum_{n=0}^{\infty} A_1\cdots A_n B_{n+1}.$

There is considerable interest in studying various aspects of the iteration (1.1) and, in particular, the tail behaviour of $S$. The story started with the seminal paper of Kesten [24], who formulated reasonable conditions for $S$ to have a heavy tail in the multidimensional case of (1.1), when the $A_n$ are matrices with positive entries and the $B_n$ are vectors. For $d = 1$ this means existence of the limit

(1.4) $\lim_{t\to\infty} t^{\alpha}\,\mathbb{P}[|S| > t] = C_{\infty}.$

Kesten's proof, being very general, is quite technical, and so for $d = 1$ a specific approach was needed. Some work was done by Grincevicius [20], but for the complete picture we owe to Goldie [19].

The authors were partially supported by NCN grant UMO-2011/01/M/ST1/04604. We thank the reviewer for their constructive comments, which helped us to improve the manuscript.


**Theorem 1.5.** [19] Assume that the following Kesten-Goldie conditions are satisfied:

• the law of $\log|A|$ conditioned on $A \ne 0$ is non-arithmetic;
• there is $\alpha > 0$ such that $\mathbb{E}|A|^{\alpha} = 1$, $\mathbb{E}[|A|^{\alpha}\log^{+}|A|] < \infty$, $\mathbb{E}|B|^{\alpha} < \infty$;
• $\mathbb{P}[Ax + B = x] < 1$ for every $x \in \mathbb{R}$.

If $\mathbb{P}[A \ge 0] = 1$, then

(1.6)
$$C_{+} = \lim_{t\to\infty} t^{\alpha}\,\mathbb{P}\{S > t\} = \frac{1}{\alpha m_{\alpha}}\,\mathbb{E}\big[((AS + B)^{+})^{\alpha} - ((AS)^{+})^{\alpha}\big],$$
$$C_{-} = \lim_{t\to\infty} t^{\alpha}\,\mathbb{P}\{S < -t\} = \frac{1}{\alpha m_{\alpha}}\,\mathbb{E}\big[((AS + B)^{-})^{\alpha} - ((AS)^{-})^{\alpha}\big]$$

for $m_{\alpha} = \mathbb{E}[|A|^{\alpha}\log|A|] < \infty$.

If $\mathbb{P}[A < 0] > 0$, then

$$C_{+} = C_{-} = \frac{1}{2\alpha m_{\alpha}}\,\mathbb{E}\big[|AS + B|^{\alpha} - |AS|^{\alpha}\big].$$

Moreover, $C_{\infty} = C_{+} + C_{-} > 0$.

In the present paper we prove that

(1.7) $C_{\infty} = \lim_{n\to\infty}\frac{1}{\alpha m_{\alpha} n}\,\mathbb{E}|X_n|^{\alpha},$

provided $X_0$ is such that $\mathbb{E}|X_0|^{\alpha} < \infty$; see Theorem 2.5. In particular, the sequence of $\alpha$th moments of $X_n$ increases linearly, while for $\beta < \alpha$ the $\beta$th moments of $X_n$ are uniformly bounded, and for $\beta > \alpha$ they grow exponentially. We also obtain analogous expressions for the constants $C_{+}$ and $C_{-}$. Notice that the formulae (1.6) cannot be used to determine the value of $C_{\infty}$, since they depend on $S$, which in general is unknown. In contrast, formula (1.7) allows one to approximate $C_{\infty}$ as the limit of an expression depending only on the input sequence $\{(A_n, B_n)\}_{n\in\mathbb{N}}$. Below, in Section 1.4, we present some other expressions for $C_{\infty}$.

In the case when $A \ge 0$, $B = 1$ and $X_0 = 0$, (1.7) can be found in Bartkiewicz, Jakubowski, Mikosch, Wintenberger [5]. Their simple argument generalizes far beyond the setting of [5], and the aim of the present paper is to shed light on that.
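To see how (1.7) can be used numerically, here is a minimal Monte Carlo sketch (our illustration, not taken from the paper). It assumes a toy model with $A$ log-normal, calibrated so that $\mathbb{E}A^{\alpha} = 1$ with $\alpha = 2$ and $m_{\alpha} = \sigma^2$, and $B$ standard normal independent of $A$. For this choice $\mathbb{E}X_n^2 = n$ exactly (since $\mathbb{E}A^2 = 1$, $\mathbb{E}B = 0$, $\mathbb{E}B^2 = 1$), so the limit in (1.7) equals $1/2$; because $|X_n|^{\alpha}$ is itself heavy tailed at the critical exponent, the Monte Carlo average converges slowly, so the exact moment recursion is computed alongside as a check.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy parameters (our choice, not from the paper): log A ~ N(-sigma^2, sigma^2),
# so E[A^2] = 1 and m_2 = E[A^2 log A] = sigma^2; B ~ N(0, 1) independent of A.
sigma, alpha = 1.0, 2.0
m_alpha = sigma**2

def mc_estimate(n, samples):
    """Monte Carlo estimate of E|X_n|^alpha / (alpha * m_alpha * n), cf. (1.7)."""
    x = np.zeros(samples)                                   # X_0 = 0
    for _ in range(n):
        a = np.exp(sigma * rng.standard_normal(samples) - sigma**2)
        b = rng.standard_normal(samples)
        x = a * x + b                                       # X_k = A_k X_{k-1} + B_k
    return np.mean(np.abs(x) ** alpha) / (alpha * m_alpha * n)

def exact_estimate(n):
    """Exact E X_n^2 (= n for this model) plugged into (1.7)."""
    second_moment = 0.0
    for _ in range(n):
        second_moment = 1.0 * second_moment + 1.0           # E A^2 = 1, E B^2 = 1
    return second_moment / (alpha * m_alpha * n)

print(exact_estimate(30))        # exactly 0.5, the value of C_infty for this toy model
print(mc_estimate(30, 100_000))  # noisy estimate of the same quantity
```

The deterministic recursion illustrates the linear growth of the $\alpha$th moments claimed above, while the Monte Carlo line shows what "depending only on the input sequence" means in practice.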

If $\mathbb{P}[A < 0] > 0$, then

$$C_{+} = C_{-} = \lim_{n\to\infty}\frac{1}{2\alpha m_{\alpha} n}\,\mathbb{E}|X_n|^{\alpha} > 0.$$

In this case the support of the stationary law $\nu$ is $\mathbb{R}$ (see e.g. [9]). If $\mathbb{P}[A \ge 0] = 1$, then an analogous argument gives

(1.8) $C_{+} = \lim_{n\to\infty}\frac{1}{\alpha m_{\alpha} n}\,\mathbb{E}((X_n)^{+})^{\alpha}, \qquad C_{-} = \lim_{n\to\infty}\frac{1}{\alpha m_{\alpha} n}\,\mathbb{E}((X_n)^{-})^{\alpha}.$

Neither of the formulae (1.6), (1.8) guarantees positivity of the limiting constant, so they must be complemented by an additional argument. There is such an argument, and it is related to the support of $\nu$. If $\mathbb{P}[A \ge 0] = 1$, then $\operatorname{supp}\nu$ is either $\mathbb{R}$ or a half line (see e.g. [9], Theorem 2.5.5 for a simple proof). If we assume additionally that $\operatorname{supp}\nu = \mathbb{R}$ and $\mathbb{P}[A > 0] = 1$, then both constants $C_{+}, C_{-}$ are strictly positive [22]; see also an alternative proof in [9], Theorem 2.4.7. Moreover, Proposition 2.5.4 in [9] gives a simple criterion for a half line to be contained in $\operatorname{supp}\nu$, i.e. for the corresponding tail constant to be strictly positive. Suppose that there are $(a_1, b_1), (a_2, b_2) \in \operatorname{supp}\mu$ such that $a_1 < 1 < a_2$ and

(1.9) $\frac{b_2}{1 - a_2} < \frac{b_1}{1 - a_1};$

then $[c, \infty) \subset \operatorname{supp}\nu$ for some $c \in \mathbb{R}$, and $C_{+} > 0$. If

(1.10) $\frac{b_2}{1 - a_2} > \frac{b_1}{1 - a_1},$

then $(-\infty, c] \subset \operatorname{supp}\nu$ for some $c \in \mathbb{R}$, and $C_{-} > 0$.
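The quantities compared in (1.9) and (1.10) are simply the fixed points $b_i/(1-a_i)$ of the affine maps $x \mapsto a_i x + b_i$: the expanding map ($a_2 > 1$) pushes orbits away from its fixed point, past the contracting map's fixed point. A tiny sketch with hypothetical pairs (our choice, purely for illustration):

```python
def fixed_point(a, b):
    """Fixed point of the affine map x -> a*x + b (requires a != 1)."""
    return b / (1.0 - a)

# Hypothetical pairs in supp(mu) with a1 < 1 < a2.
a1, b1 = 0.5, 2.0   # contracting map, fixed point 4.0
a2, b2 = 2.0, 1.0   # expanding map, fixed point -1.0

# (1.9) holds: the expanding map's fixed point lies below the contracting one's,
# so repeated expansion reaches arbitrarily large values and C_+ > 0.
assert fixed_point(a2, b2) < fixed_point(a1, b1)
print("criterion (1.9) holds: [c, infinity) is contained in supp(nu), C_+ > 0")
```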

**1.2. Lipschitz recursions.** Formula (1.7) remains valid for one-dimensional iterated function systems, i.e. recursions of the type

$$X_n = \psi_n(X_{n-1}), \qquad n = 1, 2, \dots, \qquad X_0 = x,$$

where the $\psi_n$ are random Lipschitz functions. Such systems for certain affine-like functions were already studied by Goldie [19], who adapted to them the approach working for (1.1). To generalize (1.7) we assume that the IFS is close to the affine recursion in the following sense:

$$Ax - B \le \psi(x) \le Ax + B$$

for some $A, B$ satisfying the Kesten-Goldie conditions; see Theorem 2.5.

Beginning from the early nineties, iterated function systems of i.i.d. Lipschitz maps (IFS) on a complete metric space have attracted a lot of attention: Alsmeyer [1], Arnold and Crauel [3], Brofferio and Buraczewski [6], Diaconis and Freedman [14], Duflo [15], Elton [16], Hennion and Hervé [23], Mirek [27], and they still do. In particular, it seems that modeling them after (1.1) has been very fruitful; see Alsmeyer [1] and Mirek [27]. Following this path we prove (1.7) and (1.8) in the framework of such IFS. The details will be given in Section 2.

**1.3. Multidimensional case.** Our techniques can be applied to (1.1) with multidimensional similarities $A_n$ instead of numbers [8] and, more generally, to IFS on $\mathbb{R}^{d}$ modeled on similarities [27]. This covers the case when $A_n, B_n$ are complex valued, i.e. when (1.1) is related to physical models via the complex valued smoothing transform [11, 26]. Then, as [8, 27] show, the heavy tail behavior can be observed "in directions": the role of $C_{+}, C_{-}$ is played by a measure $\sigma_{\mu}$ on the unit sphere $\mathbb{S}^{d-1}$, i.e. for a suitable $W \subset \mathbb{S}^{d-1}$ we have

$$t^{\alpha}\,\mathbb{P}\{|S| > t,\ S/|S| \in W\} \to \sigma_{\mu}(W)$$

when $t \to \infty$. Here the above is complemented by

(1.11) $\sigma_{\mu}(W) = \lim_{n\to\infty}\frac{1}{\alpha m_{\alpha} n}\,\mathbb{E}\big[|X_n|^{\alpha}\mathbf{1}_{W}(X_n/|X_n|)\big].$

The process $X_n$ and the tail behavior of $S$ in the multidimensional case of (1.1) were studied by various authors in various contexts, and regular variation of $S$ has been proved: [2, 10, 21, 22, 25]; but beyond similarities nothing like (1.11) has been obtained yet. A more detailed discussion of that is postponed to Section 4.

**1.4. Previous results.** The Kesten-Goldie theorem has found an enormous number of applications both in pure and applied mathematics; see [4, 9, 14, 28] and the comprehensive bibliography there. Therefore descriptions of the limiting constant $C_{\infty}$ are vital. We refer to [12, 17] for recent results. None of them, however, treats the case of general signed $A$ and $B$ or any aspect of the multidimensional case. The theory behind [12, 17] is quite advanced and the proofs are by no means simple. Compared to those papers, formula (1.7) is of a different nature, with weaker assumptions and an elementary proof.

Let us start with [17]. The assumption there is: $A_k, B_k$, $k \in \mathbb{Z}$, are positive and $A_k$ is independent of $B_k$. Then

$$C_{\infty} = C^{*}\,C_{ESZ},$$

where $C^{*}$ is the so-called Cramér-Lundberg constant¹ and

$$C_{ESZ} = \widetilde{\mathbb{E}}\Big[\sum_{k<0} A_0\cdots A_{k+1}B_k + B_0 + \sum_{k>0} A_1^{-1}\cdots A_k^{-1}B_k\Big]^{\alpha}.$$

For $k < 0$ the expectation is taken under $\widetilde{\mathbb{P}}(\,\cdot \mid A_0\cdots A_{k+1} \le 1,\ \forall k < 0)$, $\widetilde{\mathbb{P}}$ being the product measure $\mu^{\mathbb{N}}$, while for $k \ge 0$ the expectation is taken under $\mathbb{P}_{\alpha}(\,\cdot \mid A_1\cdots A_k > 1,\ \forall k > 0)$, $\mathbb{P}_{\alpha}$ being the product measure $\widetilde{\mu}_{\alpha}^{\mathbb{N}}$, where $\widetilde{\mu}_{\alpha}(U) = \mathbb{E}[A^{\alpha}\mathbf{1}_{U}(A)]$.

Another expression for $C_{\infty}$ can be found in [12]. Under the assumption that $A_k, B_k \ge 0$, we have

(1.12) $C_{\infty} = \frac{1}{\alpha m_{\alpha}\,\mathbb{E}\tau}\,\mathbb{E}_{\alpha}\big[Z^{\alpha}\mathbf{1}_{\tau=\infty}\big],$

where $\tau$ is a regeneration time for the Markov chain

$$X_{n+1} = A_{n+1}X_n + Q_{n+1}, \qquad Z = \sum_{n=1}^{\infty} A_1^{-1}\cdots A_n^{-1}B_n + X_0,$$

the expectation $\mathbb{E}_{\alpha}$ is taken with respect to the product measure $\widetilde{\mu}_{\alpha}^{\mathbb{N}}$, and $X_0$ is distributed according to a minorisation measure needed for the regeneration scheme to work. In fact, the authors deal with the Letac model

$$X_n = A_n\max(D_n, X_{n-1}) + B_n,$$

and they obtain for it a formula for $C_{\infty} = C_{+}$ in the spirit of (1.12). Formula (1.12) is a corollary of their main result specialized to $D_n = 0$ and $B_n \ge 0$, i.e. when the Letac model becomes the affine recursion. Formula (1.12) has already been used to simulate the Kesten-Goldie constant [13].

2. The main theorem

**2.1. Iterated Lipschitz maps.** Let $(\mathcal{X}, d)$ be a complete separable metric space with Borel $\sigma$-field $\mathcal{B}(\mathcal{X})$ and unbounded metric $d$. A temporally homogeneous Markov chain $(X_n)_{n\ge 0}$ with state space $\mathcal{X}$ is called an iterated function system of i.i.d. Lipschitz maps (IFS)², if it satisfies a recursion of the form

(2.1) $X_n = \psi(\theta_n, X_{n-1}), \qquad \text{for } n \ge 1,$

where

• $X_0, \theta_1, \theta_2, \dots$ are independent random elements on a common probability space $\Omega$;
• $\theta_1, \theta_2, \dots$ are identically distributed, taking values in a measurable space $(\Theta, \mathcal{A})$;
• $\psi : (\Theta\times\mathcal{X}, \mathcal{A}\otimes\mathcal{B}(\mathcal{X})) \to (\mathcal{X}, \mathcal{B}(\mathcal{X}))$ is jointly measurable and Lipschitz continuous in the second argument, i.e.
$$d(\psi(\theta, x), \psi(\theta, y)) \le C_{\theta}\,d(x, y)$$
for all $x, y \in \mathcal{X}$, $\theta \in \Theta$ and a suitable $C_{\theta} \in \mathbb{R}^{+}$.

We then have

$$X_n = \psi_n\circ\dots\circ\psi_1(X_0) =: \psi_{n,1}(X_0), \qquad \text{where } \psi_i(x) = \psi(\theta_i, x).$$

We will also write $\psi(\theta, x) = \psi(x)$ for short. Let $L(\psi)$, $L(\psi_{n,1})$ be the Lipschitz constants of $\psi$, $\psi_{n,1}$ respectively.

The following theorem by Elton [16] gives sufficient conditions for the existence of a stationary distribution for the Markov chain $(X_n)_{n\ge 1}$.

¹That is, $\mathbb{P}[M > t] \sim C^{*}t^{-\alpha}$, where $M = \sup_n A_1\cdots A_n$.
²We will also use the abbreviation: Lipschitz iterated system.

**Theorem 2.2.** Suppose that $\mathbb{E}\log^{+}L(\psi) < \infty$, $\mathbb{E}\log^{+}d(x_0, \psi(\theta, x_0)) < \infty$ for some $x_0$, and

$$\lim_{n\to\infty}\frac{1}{n}\log L(\psi_{n,1}) < 0 \qquad a.s.^{3}$$

Then

• $X_n$ converges in law to a random variable $S$ with law $\nu$;
• $\nu$ is the unique stationary distribution of $(X_n)_{n\ge 0}$;
• the equation $S = \psi(S)$ holds true in law.

In this context a natural question arises: under which conditions does $S$ have heavy tail behavior? Lipschitz iterative systems have recently been considered by Alsmeyer [1], and some sufficient conditions are provided there. In considerable generality they allow one to obtain

$$\lim_{t\to\infty}\frac{\log\mathbb{P}[d(x_0, X) > t]}{\log t} = -\alpha.$$

Under further specific hypotheses somewhat more was proved in [1] (see the example with AR(1) in Section 3). However, Alsmeyer's conditions are too weak to imply Theorem 2.5 below. Therefore here we work within a more restrictive setting than [1]. Our standing assumption is:

$\mathcal{X} = \mathbb{R}$ and there is a random variable $(A, B) \in \mathbb{R}^{2}$ satisfying the Kesten-Goldie conditions such that

(2.3) $Ax - B \le \psi(x) \le Ax + B, \qquad x \in \operatorname{supp}\nu.$

Condition (2.3) has a very natural geometrical interpretation: it means that the graph of $\psi$ lies between the graphs of $Ax - B$ and $Ax + B$ for every $x \in \operatorname{supp}\nu$. This allows us to think of the recursion as being close to the affine recursion.

To get an idea of the meaning of (2.3), the reader may think of the recursion $\psi(\theta, x) = \max\{Ax, B\}$, where $\theta = (A, B) \in \mathbb{R}^{+}\times\mathbb{R} = \Theta$ (see Section 3). Notice that if $X_0 = x \ge 0$ then all the iterations stay positive, which implies that $\operatorname{supp}\nu \subset [0, \infty)$. We then have

$$0 \le \max(Ax, B) - Ax \le B^{+}, \qquad x \ge 0.$$

Notice that for the max recursion (2.3) is not satisfied on $\mathbb{R}$, but only on $[0, \infty) \supseteq \operatorname{supp}\nu$. Assumption (2.3) has an important consequence: it gives formulae for the constants $C_{\infty}, C_{+}, C_{-}$ analogous to those by Goldie. This was first observed by Mirek, who proved the following theorem.

**Theorem 2.4.** ([27]) Assume that $\psi$ satisfies (2.3) as well as the assumptions of Theorems 1.5 and 2.2. Then

$$\lim_{t\to\infty} t^{\alpha}\,\mathbb{P}[|S| > t] = \frac{1}{\alpha m_{\alpha}}\,\mathbb{E}\big[|\psi(S)|^{\alpha} - |AS|^{\alpha}\big] = C_{\infty}.$$

Moreover, if $A \ge 0$ a.s., then

$$\lim_{t\to\infty} t^{\alpha}\,\mathbb{P}[S > t] = \frac{1}{\alpha m_{\alpha}}\,\mathbb{E}\big[(\psi(S)^{+})^{\alpha} - ((AS)^{+})^{\alpha}\big] = C_{+},$$
$$\lim_{t\to\infty} t^{\alpha}\,\mathbb{P}[S < -t] = \frac{1}{\alpha m_{\alpha}}\,\mathbb{E}\big[(\psi(S)^{-})^{\alpha} - ((AS)^{-})^{\alpha}\big] = C_{-}.$$

If $\mathbb{P}[A < 0] > 0$, then

$$\lim_{t\to\infty} t^{\alpha}\,\mathbb{P}[S > t] = \lim_{t\to\infty} t^{\alpha}\,\mathbb{P}[S < -t] = \frac{1}{2\alpha m_{\alpha}}\,\mathbb{E}\big[|\psi(S)|^{\alpha} - |AS|^{\alpha}\big] = C_{+} = C_{-}.$$

**Remark 2.1.** Theorem 2.4 is formulated in [27] under the assumption that $\mathbb{P}[M = 0] = 0$, but the proof does not make use of that.

³The convergence follows from the subadditive ergodic theorem.

**2.2. Main result.** The main theorem of this paper is the following.

**Theorem 2.5.** Suppose that $\mathbb{E}\log L(\psi) < 0$, that (2.3) and the assumptions of Theorem 1.5 are satisfied, and that $\mathbb{E}|X_0|^{\alpha} < \infty$. Then

$$\lim_{n\to\infty}\frac{1}{\alpha m_{\alpha} n}\,\mathbb{E}|X_n|^{\alpha} = C_{\infty}.$$

If $A \ge 0$, then

$$\lim_{n\to\infty}\frac{1}{\alpha m_{\alpha} n}\,\mathbb{E}((X_n)^{+})^{\alpha} = C_{+}, \qquad \lim_{n\to\infty}\frac{1}{\alpha m_{\alpha} n}\,\mathbb{E}((X_n)^{-})^{\alpha} = C_{-}.$$

If $\mathbb{P}[A < 0] > 0$, then

$$\lim_{n\to\infty}\frac{1}{2\alpha m_{\alpha} n}\,\mathbb{E}|X_n|^{\alpha} = C_{+} = C_{-}.$$
*Proof.* Let us consider the backward process

$$R_n^x = \psi_1\circ\dots\circ\psi_n(x), \qquad x \in \operatorname{supp}\nu.$$

Notice that (2.3) implies that for every $x \in \operatorname{supp}\nu$

(2.6) $|R_n^x| \le \sum_{k=1}^{n}|A_1|\cdots|A_{k-1}||B_k| + |A_1|\cdots|A_n||x|.$

To prove (2.6) we proceed by induction. For $n = 1$, (2.6) follows directly from (2.3). If $x \in \operatorname{supp}\nu$, then $\psi_{n+1}(x) \in \operatorname{supp}\nu$ and so by induction

$$|R_{n+1}^x| \le \sum_{k=1}^{n}|A_1|\cdots|A_{k-1}||B_k| + |A_1|\cdots|A_n|\big(|A_{n+1}||x| + |B_{n+1}|\big),$$

which proves the claim. Let

$$\widetilde{R} = \sum_{k=1}^{\infty}|A_1|\cdots|A_{k-1}||B_k|.$$

Since $\mathbb{E}\log|A| < 0$, we have

$$\mathbb{E}|R_n^x|^{\beta} \le \mathbb{E}\big(\widetilde{R}\big)^{\beta} < \infty$$

for every $n$ and $\beta < \alpha$. Moreover, $R_n^x =_d X_n^x$ for $x \in \operatorname{supp}\nu$, and $R_n^x$ converges a.s. to $S$ [14]. Let

$$b_n = \mathbb{E}|R_n^x|^{\alpha}.$$

Notice that

$$b_{n+1} = \mathbb{E}\big|\psi_1(R_n^x\circ\delta)\big|^{\alpha},$$

where $\delta$ is the shift operator $\delta(\omega_1, \omega_2, \dots) = (\omega_2, \dots)$. Since $A$ is independent of $R_n^x$ and $\mathbb{E}|A|^{\alpha} = 1$, we have

$$b_n = \mathbb{E}|AR_n^x|^{\alpha}.$$

By an elementary calculus lemma (if $b_{n+1} - b_n \to c$, then $b_n/n \to c$), it is enough to prove that

$$\lim_{n\to\infty}(b_{n+1} - b_n) = \mathbb{E}\big[|\psi(S)|^{\alpha} - |AS|^{\alpha}\big].$$

We have

$$b_{n+1} - b_n = \mathbb{E}\big[|\psi_1(R_n^x\circ\delta)|^{\alpha} - |A_1(R_n^x\circ\delta)|^{\alpha}\big] \to \mathbb{E}\big[|\psi_1(S)|^{\alpha} - |A_1 S|^{\alpha}\big],$$

provided we can dominate $\big||\psi_1(R_n^x\circ\delta)|^{\alpha} - |A_1(R_n^x\circ\delta)|^{\alpha}\big|$ by an integrable function. For $\alpha \le 1$ we write

$$\big||\psi_1(R_n^x\circ\delta)|^{\alpha} - |A_1(R_n^x\circ\delta)|^{\alpha}\big| \le \big|\psi_1(R_n^x\circ\delta) - A_1(R_n^x\circ\delta)\big|^{\alpha} \le |B_1|^{\alpha},$$

which is integrable. Notice that if $x \in \operatorname{supp}\nu$ then $R_n^x\circ\delta \in \operatorname{supp}\nu$, and so we can use (2.3). If $\alpha > 1$ we use the inequality $|a^{\alpha} - b^{\alpha}| \le \alpha\max(a^{\alpha-1}, b^{\alpha-1})|a - b|$ and we have

$$\big||\psi_1(R_n^x\circ\delta)|^{\alpha} - |A_1(R_n^x\circ\delta)|^{\alpha}\big| \le \alpha\max\big(|\psi_1(R_n^x\circ\delta)|^{\alpha-1}, |A_1(R_n^x\circ\delta)|^{\alpha-1}\big)\,\big|\psi_1(R_n^x\circ\delta) - A_1(R_n^x\circ\delta)\big|$$
$$\le C\big(|A_1|^{\alpha-1}|R_n^x\circ\delta|^{\alpha-1} + |B_1|^{\alpha-1}\big)|B_1| \le C\big(|A_1|^{\alpha-1}(\widetilde{R}\circ\delta)^{\alpha-1} + |B_1|^{\alpha-1}\big)|B_1|.$$

The latter variable is integrable:

$$\mathbb{E}\big[|A_1|^{\alpha-1}|B_1|(\widetilde{R}\circ\delta)^{\alpha-1} + |B_1|^{\alpha}\big] = \mathbb{E}\big[|A_1|^{\alpha-1}|B_1|\big]\,\mathbb{E}\big[(\widetilde{R}\circ\delta)^{\alpha-1}\big] + \mathbb{E}|B_1|^{\alpha},$$

which is finite, because $\mathbb{E}\big(|A_1|^{\alpha-1}|B_1|\big) \le \big(\mathbb{E}|A_1|^{(\alpha-1)p}\big)^{1/p}\big(\mathbb{E}|B_1|^{q}\big)^{1/q}$ with $p = \frac{\alpha}{\alpha-1}$, $q = \alpha$. To conclude, we define $b_n = \mathbb{E}\big((R_n^x)^{+}\big)^{\alpha} = \mathbb{E}\big[A^{\alpha}\big((R_n^x)^{+}\big)^{\alpha}\big]$ and repeat the above argument, making use of the simple inequality $|\max(a, 0) - \max(b, 0)| \le |a - b|$.

3. Positivity of the constants and examples

In this section we describe a few examples to which Theorems 2.4 and 2.5 can be applied. Neither of them guarantees strict positivity of $C_{\infty}$ or $C_{+}, C_{-}$. This requires some further arguments; see [7] for recent results.

**3.1. The Letac Model.** For $A \ge 0$, consider

$$\psi(x) = A\max(D, x) + B$$

with $A \in \mathbb{R}^{+}$, $B, D \in \mathbb{R}$. This model was already considered by Goldie [19]. Notice that the extremal recursion $\psi(x) = \max(Ax, B)$ is a particular case of it.

This is a Lipschitz recursion with Lipschitz constant $A$. Assume that the Kesten-Goldie conditions are satisfied and additionally $\mathbb{E}\big[A^{\alpha}|D|^{\alpha}\big] < \infty$. Then Mirek's scheme can be applied here if $\operatorname{supp}\nu \subset [-c_0, \infty)$, $c_0 \ge 0$. Indeed, for $x \ge -c_0$

$$\psi(x) - Ax = A\,\frac{D - x + |D - x|}{2} + B = A(D - x)^{+} + B,$$

and a simple calculation shows that

(3.1) $-AD^{-} + B \le \psi(x) - Ax \le AD^{+} + Ac_0 + B.$

Therefore, by Theorems 2.4 and 2.5 we may conclude that

$$\lim_{t\to\infty}\mathbb{P}[S > t]\,t^{\alpha} = C_{+} = \lim_{n\to\infty}\frac{1}{\alpha m_{\alpha} n}\,\mathbb{E}\big[((X_n^x)^{+})^{\alpha}\big]$$

and

$$\lim_{n\to\infty}\frac{1}{\alpha m_{\alpha} n}\,\mathbb{E}\big[((X_n^x)^{-})^{\alpha}\big] = 0.$$

(3.1) can be used to get positivity of $C_{+}$ provided the same holds for the recursion with $\psi(x) = Ax - AD^{-} + B$. A necessary and sufficient condition for positivity of $C_{+}$ is given in [7]. See also [12] and [19] for sufficient conditions for positivity of $C_{+}$.
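As a sanity check of the two limits above, one can iterate the Letac recursion directly. The sketch below is our construction, not from the paper: it reuses a log-normal $A$ with $\mathbb{E}A^{2} = 1$ and $m_{2} = 1$, and takes hypothetical $D, B \ge 0$, so that all iterates from $X_0 = 0$ stay nonnegative and the $C_{-}$ term vanishes identically.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, alpha, m_alpha = 1.0, 2.0, 1.0   # toy choice: E A^2 = 1, m_2 = sigma^2

def letac_step(x):
    """One step of psi(x) = A max(D, x) + B with hypothetical D, B >= 0."""
    a = np.exp(sigma * rng.standard_normal(x.shape) - sigma**2)
    d = rng.uniform(0.0, 1.0, x.shape)
    b = rng.uniform(0.0, 1.0, x.shape)
    return a * np.maximum(d, x) + b

n, samples = 20, 100_000
x = np.zeros(samples)
for _ in range(n):
    x = letac_step(x)

c_plus = np.mean(np.maximum(x, 0.0) ** alpha) / (alpha * m_alpha * n)
c_minus = np.mean(np.maximum(-x, 0.0) ** alpha) / (alpha * m_alpha * n)
print(c_plus)    # rough (slowly converging) estimate of C_+
print(c_minus)   # exactly 0.0: every iterate is nonnegative here
```

Since $|X_n|^{\alpha}$ is heavy tailed at the critical exponent, the $C_{+}$ estimate is noisy; the point is only that the negative-part term is identically zero, matching the second limit above.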

**3.2. Lipschitz recursions modeled on the Letac model.** For $A \ge 0$, consider a Lipschitz transformation $\psi$ that satisfies

$$A\max(D_1, x) + B_1 \le \psi(x) \le A\max(D_2, x) + B_2$$

with $A \in \mathbb{R}^{+}$, $B_1, B_2, D_1, D_2 \in \mathbb{R}$, and suppose that

$$\mathbb{E}\log\operatorname{Lip}\psi < 0, \qquad \mathbb{E}\log^{+}\big(A(1 + |D_i|) + |B_i|\big) < \infty$$

for $i = 1, 2$. Then the assumptions of Theorem 2.2 are satisfied and

$$X_{n+1}^x = \psi(\theta_{n+1}, X_n^x)$$

has a unique stationary solution that does not depend on the starting point. Assume additionally that $\operatorname{supp}\nu \subset [-c_0, \infty)$, $c_0 \ge 0$. Proceeding as before, we obtain that for $x > 0$

$$-AD_1^{-} + B_1 \le \psi(x) - Ax \le AD_2^{+} + B_2 + Ac_0.$$

Suppose now the Kesten-Goldie conditions and $\mathbb{E}\big[A^{\alpha}(D_1^{-} + D_2^{+})^{\alpha} + (B_1^{-} + B_2^{+})^{\alpha}\big] < \infty$. Then the assumptions of Theorem 2.5 are satisfied and we may conclude that

$$\lim_{t\to\infty}\mathbb{P}[|S| > t]\,t^{\alpha} = \lim_{t\to\infty}\mathbb{P}[S > t]\,t^{\alpha} = C_{+} = \lim_{n\to\infty}\frac{1}{\alpha m_{\alpha} n}\,\mathbb{E}\big[((X_n^x)^{+})^{\alpha}\big]$$

and

$$\lim_{n\to\infty}\frac{1}{\alpha m_{\alpha} n}\,\mathbb{E}\big[((X_n^x)^{-})^{\alpha}\big] = 0.$$

In view of [7], positivity of $C_{+}$ is equivalent to unboundedness of the support of the stationary solution. Again, see [12] for some sufficient conditions for positivity of $C_{+}$.

**3.3. The AR(1) model with ARCH(1) errors.** This is a nonlinear model introduced by Engle and Weiss. It has received attention due to its relevance in mathematical finance, where it is known as a relatively simple model that captures temporal variation of volatility in financial data sets. It is defined by the recursion

$$X_n = \alpha X_{n-1} + \big(\beta + \lambda X_{n-1}^{2}\big)^{1/2}\varepsilon_n, \qquad n \ge 1,$$

with $(\alpha, \beta, \lambda) \in \mathbb{R}\times\mathbb{R}^{+}\times\mathbb{R}^{+}$, where the $\varepsilon_n$, called innovations, are assumed to be i.i.d., symmetric and independent of the initial distribution $X_0$. (See [1] for a nice description.)

If $X_0$ is symmetric, then for every $n$, $X_n$ is symmetric, and so is the stationary distribution $S$, provided the assumptions of Theorem 2.2 are satisfied and $S$ does exist. This is the case if $\mathbb{E}\log(|\alpha| + \lambda^{1/2}|\varepsilon|) < 0$. Then

(3.2) $S = \alpha S + \big(\beta + \lambda S^{2}\big)^{1/2}\varepsilon \qquad \text{in law.}$

We may assume that $\alpha > 0$, because $S$ satisfies (3.2) with $-\alpha$ too. $|S|$ is independent of $\operatorname{sgn}S$, so putting $\eta = \varepsilon\operatorname{sgn}S$ we get a variable independent of $|S|$, and so we may consider $W = |S|^{2}$, which satisfies the equation

$$W = \psi(W) := (\alpha + \lambda^{1/2}\eta)^{2}W + \frac{2\alpha\beta\eta W^{1/2}}{(\beta + \lambda W)^{1/2} + \lambda^{1/2}W^{1/2}} + \beta\eta^{2} \qquad \text{in law.}$$

Using this equation Alsmeyer proved that

(3.3) $\lim_{t\to\infty} t^{2\kappa}\,\mathbb{P}(S > t) = \frac{1}{2\kappa m_{\kappa}}\,\mathbb{E}\big(\psi(W)^{\kappa} - (AW)^{\kappa}\big) > 0,$

where $m_{\kappa} = \mathbb{E}A^{\kappa}\log A$, $A = (\alpha + \lambda^{1/2}\eta)^{2}$, with $\kappa$ playing the role of the Cramér exponent.

Notice that

$$0 \le \psi(W) \le (\alpha + \lambda^{1/2}\eta)^{2}W + \beta\big(2\alpha\lambda^{-1/2}|\eta| + \eta^{2}\big).$$

Therefore, if

$$W_n = \psi_n(W_{n-1}), \qquad W_0 = 0,$$

then by Theorem 2.5

$$\lim_{t\to\infty} t^{2\kappa}\,\mathbb{P}[S > t] = \frac{1}{2}\lim_{t\to\infty} t^{\kappa}\,\mathbb{P}[W > t] = \lim_{n\to\infty}\frac{1}{2\kappa m_{\kappa} n}\,\mathbb{E}W_n^{\kappa}.$$
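The Cramér exponent $\kappa$ here solves $\mathbb{E}A^{\kappa} = 1$ with $A = (\alpha + \lambda^{1/2}\eta)^{2}$. For Gaussian innovations it can be computed numerically; the sketch below (our illustration, with hypothetical parameter values) combines Gauss-Hermite quadrature for the expectation with bisection, using the fact that $s\mapsto\mathbb{E}A^{s}$ is convex, equals 1 at $s = 0$, dips below 1 near 0 (since $\mathbb{E}\log A < 0$), and eventually grows above 1.

```python
import numpy as np

# Hypothetical AR(1)/ARCH(1) parameters (not from the paper); eta ~ N(0, 1).
a_par, l_par = 0.3, 0.3
nodes, weights = np.polynomial.hermite.hermgauss(200)

def E_A_pow(s):
    """E[A^s] for A = (a_par + sqrt(l_par)*eta)^2 via Gauss-Hermite quadrature."""
    eta = np.sqrt(2.0) * nodes           # change of variables for N(0, 1)
    vals = np.abs(a_par + np.sqrt(l_par) * eta) ** (2.0 * s)
    return float(np.sum(weights * vals) / np.sqrt(np.pi))

def cramer_exponent(lo=0.01, hi=8.0, tol=1e-10):
    """Bisection for kappa > 0 with E[A^kappa] = 1; assumes E[A^lo] < 1 < E[A^hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if E_A_pow(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

kappa = cramer_exponent()
print(kappa, E_A_pow(kappa))   # E[A^kappa] equals 1 up to the tolerance
```

The same bracketing-and-bisection pattern works for any driving law whose moments can be evaluated, which is why we state the monotonicity assumptions explicitly in the docstring.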

4. Similarities

In this section we consider a multidimensional version of (1.1), i.e.

$$X_n^x = \psi_n(X_{n-1}^x) = A_n X_{n-1}^x + B_n \in \mathbb{R}^{d},$$

assuming that the matrices $A_n$ are similarities, $A_n \in \mathbb{R}^{+}\times O(\mathbb{R}^{d})$, $B_n \in \mathbb{R}^{d}$ and $X_0^x = x \in \mathbb{R}^{d}$. An element $g \in GL(\mathbb{R}^{d})$ is a similarity in the sense of Euclidean geometry if

$$|gx| = |g||x|, \qquad x \in \mathbb{R}^{d},$$

where the norm of a linear transformation $g$ of $\mathbb{R}^{d}$ is denoted $|g|$. If $g$ is a similarity, then $\frac{1}{|g|}g$ preserves the norm on $\mathbb{R}^{d}$. Hence the subgroup $G \subseteq GL(\mathbb{R}^{d})$ of all similarities is isomorphic to the direct product of the multiplicative group of positive real numbers $\mathbb{R}^{+}$ and the orthogonal group $O(\mathbb{R}^{d})$. We will write $G = \mathbb{R}^{+}\times O(\mathbb{R}^{d})$.

Let $H = \mathbb{R}^{d}\rtimes G$ be the group of transformations

$$\mathbb{R}^{d} \ni x \mapsto hx = gx + q \in \mathbb{R}^{d},$$

where $h = (q, g)$ with $g \in G$, $q \in \mathbb{R}^{d}$. Then $(B_n, A_n)$ is an $H$-valued i.i.d. sequence with distribution $\mu$. (Here $B_n \in \mathbb{R}^{d}$ and $A_n \in G$.)

If $\mathbb{E}\log|A| < 0$ and $\mathbb{E}\log^{+}|B| < \infty$, then the assumptions of Theorem 2.2 are satisfied and the sequence $X_n$ converges in law to a random variable $S$ given by (1.3), which is the unique solution of the random difference equation (1.2). The main result of [8] shows that under appropriate assumptions the random variable $S$ is regularly varying. The law of $S$ will be denoted by $\nu$.

We are going to consider a slightly more general situation than in [8]. Namely, we allow $A$ to take values in $G\cup\{0\}$. Then $(B, A) \in H\cup\{0\}$, and the latter will be our standing assumption. This setting was not considered in [8], but the basic result of [8] describing the tail of $S$ (Theorem 4.1 below) holds true.

Let $\bar\mu$ be the law of $A$, let $G_{\bar\mu}$ be the closed group generated by the support of $\bar\mu$ restricted to $G$ (we avoid a possible non-zero mass of $\bar\mu$ at $A = 0$), and let $K_{\bar\mu} = G_{\bar\mu}\cap O(\mathbb{R}^{d})$. Then $G_{\bar\mu} = \mathbb{R}^{+}\times K_{\bar\mu}$. The only place in [8] where the group structure interferes is a renewal theorem on $G_{\bar\mu}$ applied to the probability measure $|A|^{\alpha}\bar\mu$, and a possible positive mass of $\bar\mu$ at zero does not play any role.

Set

$$\nu_g(f) = |g|^{-\alpha}(g\nu)(f) = |g|^{-\alpha}\int_{\mathbb{R}^{d}} f(gx)\,\nu(dx) = |g|^{-\alpha}\,\mathbb{E}f(gS),$$

where $S$ is the solution to (1.2). For $x \in \mathbb{R}^{d}$, $x \ne 0$, we denote by $\bar{x}$ the projection of $x$ onto the unit sphere $\mathbb{S}^{d-1}$, i.e. $\bar{x} = x/|x|$.

The following result from [8] describes the tail of $\nu$.

**Theorem 4.1.** Assume that the action of $\operatorname{supp}\mu$ on $\mathbb{R}^{d}$ has no fixed point, and that

• $\mathbb{E}[\log|A|] < 0$;
• there is $\alpha > 0$ such that $\mathbb{E}|A|^{\alpha} = 1$;
• $\mathbb{E}[|A|^{\alpha}\log^{+}|A|]$ and $\mathbb{E}|B|^{\alpha}$ are both finite;
• the law of $|A|$ conditioned on $\mathbb{R}^{+}$ is not arithmetic.

Then there is a Radon measure $\Lambda$ on $\mathbb{R}^{d}\setminus\{0\}$ such that for every bounded continuous function $F$ that vanishes in a neighborhood of zero

(4.2) $\lim_{|g|\to 0,\ g\in G_{\bar\mu}}\langle F, \nu_g\rangle = \langle F, \Lambda\rangle = \frac{1}{\alpha m_{\alpha}}\int_{G_{\bar\mu}} |g|^{-\alpha}\,\mathbb{E}\big[F(gS) - F(gAS)\big]\,d\lambda(g),$

where $\lambda = \frac{dr}{r}\times dk$ is the Haar measure on $G_{\bar\mu}$ such that $\int_{K_{\bar\mu}} dk = 1$. Moreover, there is a finite $K_{\bar\mu}$-invariant measure $\sigma_{\mu}$ on $\mathbb{S}^{d-1}$ such that, in radial coordinates, $\Lambda = \sigma_{\mu}\otimes\frac{\alpha\,dr}{r^{\alpha+1}}$, i.e.

(4.3) $\langle F, \Lambda\rangle = \int_{\mathbb{R}^{+}\times\mathbb{S}^{d-1}} F(r\omega)\,\frac{\alpha}{r^{\alpha+1}}\,dr\,\sigma_{\mu}(d\omega).$

Finally, (4.2) holds for every function $F$ such that $0 \notin \operatorname{supp}F$, the measure $\Lambda$ of the set of discontinuities of $F$ is 0, and for some $\varepsilon > 0$

$$\sup_{x\ne 0}\big(|x|^{-\alpha}\big|\log|x|\big|^{1+\varepsilon}|F(x)|\big) < \infty.$$

In the multidimensional case the measure $\sigma_{\mu}$ is a straightforward analogue of the limiting constants $C_{-}$ and $C_{+}$. Indeed, when $d = 1$, $\sigma_{\mu} = C_{+}\delta_{1} + C_{-}\delta_{-1}$. It is then natural to describe $\sigma_{\mu}$ more carefully, not only its total mass.

Our main result in the multidimensional case is the following.

**Theorem 4.4.** Suppose that the hypotheses of Theorem 4.1 are satisfied. Then for any continuous, $K_{\bar\mu}$-invariant function $f$ on $\mathbb{S}^{d-1}$ we have

(4.5) $\lim_{n\to\infty}\frac{1}{\alpha m_{\alpha} n}\,\mathbb{E}\big[|X_n^x|^{\alpha}f\big(\overline{X_n^x}\big)\big] = \langle f, \sigma_{\mu}\rangle = \int_{\mathbb{S}^{d-1}} f(\omega)\,\sigma_{\mu}(d\omega).$

*Proof.* Let $F$ be Hölder continuous with exponent $\xi < \min(\alpha, 1)$ and compactly supported in $\mathbb{R}^{d}\setminus\{0\}$. We fix $\varepsilon > 0$ and consider the family of functions

$$\chi_{s,F}(g) = |g|^{-s}\,\mathbb{E}\big[F(gS) - F(gAS)\big]$$

for $\alpha - \varepsilon \le s \le \alpha$. The functions $\chi_{s,F}$ are all directly integrable (see [8]) and

$$|\chi_{s,F}| \le |\chi_{\alpha-\varepsilon,F}| + |\chi_{\alpha,F}|.$$

Therefore,

$$\lim_{s\to\alpha^{-}}\int_{G_{\bar\mu}}\chi_{s,F}(g)\,d\lambda(g) = \int_{G_{\bar\mu}} |g|^{-\alpha}\big(\mathbb{E}(F(gS) - F(gAS))\big)\,d\lambda(g) = \alpha m_{\alpha}\langle F, \Lambda\rangle.$$

Let now $F(r\omega) = \phi(r)f(\omega)$ with $f(k\omega) = f(\omega)$, $\phi, f$ Hölder continuous and $\operatorname{supp}\phi\subset\mathbb{R}^{+}$. Then

(4.6) $\int_{G_{\bar\mu}} |g|^{-s}\big(\mathbb{E}(F(gS) - F(gAS))\big)\,d\lambda(g) = \int_{\mathbb{R}^{+}} r^{-s}\big(\mathbb{E}(F(rS) - F(r|A|S))\big)\,\frac{dr}{r},$

and for $s < \alpha$

$$\int_{\mathbb{R}^{+}} r^{-s}\,\mathbb{E}F(rS)\,\frac{dr}{r} = \int_{\mathbb{R}^{+}} r^{-s}\,\mathbb{E}\big[\phi(r|S|)f(\bar S)\big]\,\frac{dr}{r} = \mathbb{E}\Big[f(\bar S)\int_{\mathbb{R}^{+}} r^{-s}\phi(r|S|)\,\frac{dr}{r}\Big]$$
$$= \mathbb{E}\Big[f(\bar S)\int_{c|S|^{-1}}^{\infty} r^{-s}\phi(r|S|)\,\frac{dr}{r}\Big] = \mathbb{E}\Big[f(\bar S)\int_{c}^{\infty} r^{-s}|S|^{s}\phi(r)\,\frac{dr}{r}\Big]$$
$$= \mathbb{E}\big[|S|^{s}f(\bar S)\big]\int_{\mathbb{R}^{+}} r^{-s}\phi(r)\,\frac{dr}{r} = \mathbb{E}\big[|\psi(S)|^{s}f\big(\overline{\psi(S)}\big)\big]\int_{\mathbb{R}^{+}} r^{-s}\phi(r)\,\frac{dr}{r},$$

where $\psi(S) = AS + B$ with $(A, B)$ independent of $S$. Analogously we proceed with the second term.

Therefore,

$$\int_{\mathbb{R}^{+}}\chi_{s,F}(r)\,\frac{dr}{r} = \int_{\mathbb{R}^{+}} r^{-s}\phi(r)\,\frac{dr}{r}\cdot\mathbb{E}\big[|\psi(S)|^{s}f\big(\overline{\psi(S)}\big) - |AS|^{s}f\big(\overline{AS}\big)\big],$$

and so

(4.7) $\int_{\mathbb{R}^{+}}\chi_{\alpha,F}(r)\,\frac{dr}{r} = \lim_{s\to\alpha}\int_{\mathbb{R}^{+}}\chi_{s,F}(r)\,\frac{dr}{r} = \int_{\mathbb{R}^{+}} r^{-\alpha}\phi(r)\,\frac{dr}{r}\,\mathbb{E}\big[|\psi(S)|^{\alpha}f\big(\overline{\psi(S)}\big) - |AS|^{\alpha}f\big(\overline{AS}\big)\big],$

provided we can dominate

(4.8) $|\psi(S)|^{s}f\big(\overline{\psi(S)}\big) - |AS|^{s}f\big(\overline{AS}\big) = \big(|\psi(S)|^{s} - |AS|^{s}\big)f\big(\overline{\psi(S)}\big) + |AS|^{s}\big(f\big(\overline{\psi(S)}\big) - f\big(\overline{AS}\big)\big)$

by an integrable function independently of $s$. As in the proof of Theorem 2.5 we have

$$\big||\psi(S)|^{s} - |AS|^{s}\big|\,f\big(\overline{\psi(S)}\big) \le C|B|^{s} \qquad \text{if } s \le 1$$

and

$$\big||\psi(S)|^{s} - |AS|^{s}\big|\,f\big(\overline{\psi(S)}\big) \le C\big(|B|^{s} + |A|^{s-1}|B||S|^{s-1}\big) \qquad \text{if } s > 1.$$

Therefore, the first term in (4.8) is bounded uniformly by $C(1 + |B|^{\alpha})$ or by $C\big(1 + |B|^{\alpha} + |A|^{\alpha-1}|B||S|^{\alpha-1}\big)$, which is integrable. For the second term in (4.8) we have

$$|A|^{\alpha}|S|^{\alpha}\big|f\big(\overline{\psi(S)}\big) - f\big(\overline{AS}\big)\big| \le C|A|^{\alpha}|S|^{\alpha}\Big|\frac{\psi(S)}{|\psi(S)|} - \frac{AS}{|AS|}\Big|^{\xi} = C|A|^{\alpha-\xi}|S|^{\alpha-\xi}|\psi(S)|^{-\xi}\,\big|\psi(S)|AS| - |\psi(S)|AS\big|^{\xi}.$$

Moreover,

$$\big|\psi(S)|AS| - |\psi(S)|AS\big|^{\xi} \le \big|\psi(S)|AS| - \psi(S)|\psi(S)|\big|^{\xi} + \big|\psi(S)|\psi(S)| - |\psi(S)|AS\big|^{\xi} \le 2|\psi(S)|^{\xi}\big|\psi(S) - AS\big|^{\xi} \le 2|\psi(S)|^{\xi}|B|^{\xi}.$$

Hence

(4.9) $|A|^{\alpha}|S|^{\alpha}\big|f\big(\overline{\psi(S)}\big) - f\big(\overline{AS}\big)\big| \le 2C|A|^{\alpha-\xi}|S|^{\alpha-\xi}|B|^{\xi},$

which has finite expectation.

Since $\nu$ does not have atoms (see e.g. [9]), $\mathbb{P}[S = 0 \text{ or } \psi(S) = 0] = 0$, and so the above argument holds true on a set of full measure. Finally, from (4.3) and (4.7) we obtain that for $\xi$-Hölder functions $f$

$$\langle f, \sigma_{\mu}\rangle = \frac{1}{\alpha m_{\alpha}}\,\mathbb{E}\big(|\psi(S)|^{\alpha}f\big(\overline{\psi(S)}\big) - |AS|^{\alpha}f\big(\overline{AS}\big)\big).$$

Let

$$b_n = \mathbb{E}\big[|X_n^x|^{\alpha}f\big(\overline{X_n^x}\big)\big] = \mathbb{E}\big[|R_n^x|^{\alpha}f\big(\overline{R_n^x}\big)\big],$$

where $R_n^x = \sum_{k=1}^{n} A_1\cdots A_{k-1}B_k + A_1\cdots A_n x$, with the convention that if $R_n^x = 0$ then the function under the expectation is 0. Then

$$b_n = \mathbb{E}\big[|A_1(R_n^x\circ\delta)|^{\alpha}f\big(\overline{A_1(R_n^x\circ\delta)}\big)\big],$$

because $\mathbb{E}|A_1|^{\alpha} = 1$ and, by the $K_{\bar\mu}$-invariance of $f$, $f\big(\overline{A_1(R_n^x\circ\delta)}\big) = f\big(\overline{R_n^x\circ\delta}\big)$. Therefore,

(4.10) $b_{n+1} - b_n = \mathbb{E}\big[|R_{n+1}^x|^{\alpha}f\big(\overline{R_{n+1}^x}\big) - |A_1(R_n^x\circ\delta)|^{\alpha}f\big(\overline{A_1(R_n^x\circ\delta)}\big)\big]$

and

$$b_{n+1} - b_n \to \mathbb{E}\big[|\psi(S)|^{\alpha}f\big(\overline{\psi(S)}\big) - |AS|^{\alpha}f\big(\overline{AS}\big)\big] = \alpha m_{\alpha}\langle f, \sigma_{\mu}\rangle,$$

provided we can dominate the integrand in (4.10) by an $L^{1}$ function. We proceed as in the proof of Theorem 2.5. We have

$$|R_{n+1}^x|^{\alpha}f\big(\overline{R_{n+1}^x}\big) - |A_1(R_n^x\circ\delta)|^{\alpha}f\big(\overline{A_1(R_n^x\circ\delta)}\big) = \big(|R_{n+1}^x|^{\alpha} - |A_1(R_n^x\circ\delta)|^{\alpha}\big)f\big(\overline{R_{n+1}^x}\big)$$
$$\qquad + |A_1(R_n^x\circ\delta)|^{\alpha}\big(f\big(\overline{R_{n+1}^x}\big) - f\big(\overline{A_1(R_n^x\circ\delta)}\big)\big).$$

But as in the proof of Theorem 2.5,

(4.11) $\big||R_{n+1}^x|^{\alpha} - |A_1(R_n^x\circ\delta)|^{\alpha}\big| \le |B_1|^{\alpha} \qquad \text{if } \alpha \le 1,$

or

(4.12) $\big||R_{n+1}^x|^{\alpha} - |A_1(R_n^x\circ\delta)|^{\alpha}\big| \le \alpha\big(|B_1|^{\alpha} + |A_1|^{\alpha-1}|B_1|(\widetilde{R}^x)^{\alpha-1}\big) \qquad \text{if } \alpha > 1,$
which is integrable. Proceeding as above, we have

(4.13) $|A_1(R_n^x\circ\delta)|^{\alpha}\big|f\big(\overline{R_{n+1}^x}\big) - f\big(\overline{A_1(R_n^x\circ\delta)}\big)\big| \le C|A_1|^{\alpha-\xi}|B_1|^{\xi}\big(\widetilde{R}^x\big)^{\alpha-\xi}$

on the set $\{R_{n+1}^x \ne 0,\ A_1(R_n^x\circ\delta) \ne 0\}$. But since $|R_{n+1}^x - A_1(R_n^x\circ\delta)| = |B_1|$, if one of the terms $R_{n+1}^x$ or $A_1(R_n^x\circ\delta)$ is zero, then the other is equal to $B_1$.

Therefore, (4.5) holds for Hölder functions. It can easily be extended to continuous functions, because for any $f \in C(\mathbb{S}^{d-1})$ and $\eta > 0$ we can find $f_{\eta}$ that is $\xi$-Hölder and $\|f_{\eta} - f\|_{L^{\infty}} < \eta$. Then

(4.14) $\Big|\frac{1}{\alpha m_{\alpha} n}\,\mathbb{E}\big[|X_n^x|^{\alpha}f\big(\overline{X_n^x}\big)\big] - \frac{1}{\alpha m_{\alpha} n}\,\mathbb{E}\big[|X_n^x|^{\alpha}f_{\eta}\big(\overline{X_n^x}\big)\big]\Big| \le \eta\,\frac{1}{\alpha m_{\alpha} n}\,\mathbb{E}\big[|X_n^x|^{\alpha}\big] \le C\eta$

by (4.5) applied to the constant function 1 on $\mathbb{S}^{d-1}$. Hence (4.5) follows for $f \in C(\mathbb{S}^{d-1})$.

Theorem 4.4 can be proved in the more general setting of Lipschitz iterated systems modeled on similarities (see Mirek [27]); however, we are not going to give any further details in this paper.
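Formula (4.5) can also be tried out numerically. The sketch below is a toy construction of ours, not from the paper: it takes $d = 2$, $A_n = r_n k_n$ with $r_n$ log-normal calibrated so that $\mathbb{E}r^{2} = 1$ (so $\alpha = 2$, $m_{\alpha} = \sigma^{2}$), $k_n$ a uniform random rotation, and Gaussian $B_n$, and estimates the total mass $\sigma_{\mu}(\mathbb{S}^{1})$ by taking $f \equiv 1$, the simplest $K_{\bar\mu}$-invariant function.

```python
import numpy as np

rng = np.random.default_rng(2)
sigma, alpha, m_alpha = 1.0, 2.0, 1.0   # toy choice: E r^2 = 1, m_2 = sigma^2

def similarity_step(x):
    """x -> r * R(theta) x + b for a batch of 2D states x of shape (N, 2)."""
    n = x.shape[0]
    r = np.exp(sigma * rng.standard_normal(n) - sigma**2)
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    c, s = np.cos(theta), np.sin(theta)
    rotated = np.stack([c * x[:, 0] - s * x[:, 1],
                        s * x[:, 0] + c * x[:, 1]], axis=1)
    return r[:, None] * rotated + rng.standard_normal((n, 2))

n_steps, samples = 20, 50_000
x = np.zeros((samples, 2))
for _ in range(n_steps):
    x = similarity_step(x)

norms = np.linalg.norm(x, axis=1)
total_mass = np.mean(norms ** alpha) / (alpha * m_alpha * n_steps)
print(total_mass)   # rough estimate of sigma_mu(S^1), the analogue of C_infty
```

By rotation invariance of this toy driving law, $\sigma_{\mu}$ is uniform on the circle, and the exact second-moment recursion gives $\mathbb{E}|X_n|^{2} = 2n$, so the estimator has expectation exactly 1, though the heavy tail of $|X_n|^{\alpha}$ makes individual runs noisy.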

References

[1] G. Alsmeyer. On the stationary tail index of iterated random Lipschitz functions. Preprint, arxiv.org/abs/1409.2663.

[2] G. Alsmeyer, S. Mentemeier. Tail behavior of stationary solutions of random difference equations: the case of regular matrices. J. Difference Equ. Appl., 18(8):1305–1332, 2012.

[3] L. Arnold, H. Crauel. Iterated function systems and multiplicative ergodic theory. In Diffusion Processes and Related Problems in Analysis II, M. Pinsky and V. Wihstutz, eds., Birkhäuser, Boston, 1992, pp. 283–305.

[4] M. Babillot, P. Bougerol, L. Elie. The random difference equation X_n = A_n X_{n−1} + B_n in the critical case. Ann. Probab., 25(1):478–493, 1997.

[5] K. Bartkiewicz, A. Jakubowski, T. Mikosch, O. Wintenberger. Stable limits for sums of dependent infinite variance random variables. Probab. Theory Relat. Fields, 150:337–372, 2011.

[6] S. Brofferio, D. Buraczewski. On unbounded invariant measures of stochastic dynamical systems. Ann. Probab., 43:1456–1492, 2015.

[7] D. Buraczewski, E. Damek. A simple proof for precise tail asymptotics of affine type Lipschitz recursions. Preprint.

[8] D. Buraczewski, E. Damek, Y. Guivarc'h, A. Hulanicki, R. Urban. Tail-homogeneity of stationary measures for some multidimensional stochastic recursions. Probab. Theory Related Fields, 145(3):385–420, 2009.

[9] D. Buraczewski, E. Damek, T. Mikosch. The Stochastic Equation X =_d AX + B. Work in progress.

[10] D. Buraczewski, E. Damek, M. Mirek. Asymptotics of stationary solutions of multivariate stochastic recursions with heavy tailed inputs and related limit theorems. Stoch. Proc. Appl., 122:42–67, 2012.

[11] B. Chauvin, Q. Liu, N. Pouyanne. Limit distributions for multitype branching processes of m-ary search trees. Ann. Inst. Henri Poincaré Probab. Stat., 50(2):628–654, 2014.

[12] J. F. Collamore, A. N. Vidyashankar. Tail estimates for stochastic fixed point equations via nonlinear renewal theory. Stochastic Process. Appl., 123(9):3378–3429, 2013.

[13] J. F. Collamore, G. Diao, A. N. Vidyashankar. Rare event simulation for processes generated via stochastic fixed point equations. Ann. Appl. Probab., 24(5):2143–2175, 2014.

[14] P. Diaconis, D. Freedman. Iterated random functions. SIAM Rev., 41(1):45–76, 1999.

[15] M. Duflo. Random Iterative Models. Springer Verlag, New York, 1997.

[16] J. H. Elton. A multiplicative ergodic theorem for Lipschitz maps. Stoch. Process. Appl., 34:39–47, 1990.

[17] N. Enriquez, C. Sabot, O. Zindy. A probabilistic representation of constants in Kesten's renewal theorem. Probab. Theory Related Fields, 144(3-4):581–613, 2009.

[18] H. Furstenberg, H. Kesten. Products of random matrices. Ann. Math. Statist., 31:457–469, 1960.

[19] C. M. Goldie. Implicit renewal theory and tails of solutions of random equations. Ann. Appl. Probab., 1(1):126–166, 1991.

[20] A. K. Grincevicius. On limit distribution for a random walk on the line. Lithuanian Math. J., 15:580–589, 1975 (English translation).

[21] Y. Guivarc'h, E. Le Page. Spectral gap properties and asymptotics of stationary measures for affine random walks. Annales IHP, accepted, arXiv:1204.6004, 2012.

[22] Y. Guivarc'h, E. Le Page. On the homogeneity at infinity of the stationary probability for an affine random walk. Contemporary Mathematics, 631:119–130, 2015.

[23] H. Hennion, L. Hervé. Central limit theorems for iterated random Lipschitz mappings. Ann. Probab., 32(3A):1934–1984, 2004.

[24] H. Kesten. Random difference equations and renewal theory for products of random matrices. Acta Math., 131(1):207–248, 1973.

[25] C. Klüppelberg, S. Pergamenchtchikov. The tail of the stationary distribution of a random coefficient AR(q) model. Ann. Appl. Probab., 14(2):971–1005, 2004.

[26] M. Meiners, S. Mentemeier. Solutions to complex smoothing equations. Preprint, arXiv:1507.08043v1.

[27] M. Mirek. Heavy tail phenomenon and convergence to stable laws for iterated Lipschitz maps. Probab. Theory Related Fields, 151(3-4):705–734, 2011.

[28] S. T. Rachev, G. Samorodnitsky. Limit laws for a stochastic process and random recursion arising in probabilistic modelling. Adv. in Appl. Probab., 27(1):185–202, 1995.

[29] W. Vervaat. On a stochastic difference equation and a representation of non-negative infinitely divisible random variables. Adv. Appl. Prob., 11:750–783, 1979.

D. Buraczewski, E. Damek, and J. Zienkiewicz, Instytut Matematyczny, Uniwersytet Wroclawski, 50-384 Wroclaw, pl. Grunwaldzki 2/4, Poland

*E-mail address: dbura@math.uni.wroc.pl, edamek@math.uni.wroc.pl, zenek@math.uni.wroc.pl*