ANNALES POLONICI MATHEMATICI LXVIII.1 (1998)

Randomly connected dynamical systems — asymptotic stability

by Katarzyna Horbacz (Katowice)
Abstract. We give sufficient conditions for asymptotic stability of a Markov operator governing the evolution of measures due to the action of randomly chosen dynamical systems. We show that the existence of an invariant measure for the transition operator implies the existence of an invariant measure for the semigroup generated by the system.
0. Introduction. Let $\Pi_k : \mathbb{R}_+ \times Y \to Y$, $k = 1, \dots, N$, be a sequence of semidynamical systems and let $[p_{ks}]_{k,s=1}^N$, $p_{ks} : Y \to [0,1]$, be a matrix of probabilities (see (8)). Let $\{t_n\}$, $n = 1, 2, \dots,$ be a sequence of random variables such that the increments $\Delta t_n = t_n - t_{n-1}$ are independent and have the same density distribution function $g(t) = a e^{-at}$.
The action of randomly chosen dynamical systems can be roughly described as follows. We choose an initial point $x_0 \in Y$. Next we randomly select an integer from $\{1, \dots, N\}$ in such a way that the probability of choosing $k_1$ is $p_{k_1}(x_0)$. When $k_1$ is drawn we define
$$X(t) = \Pi_{k_1}(t, x_0) \quad\text{for } 0 \le t \le t_1, \qquad x_1 = X(t_1).$$
Having $x_1$ we select $k_2$ with probability $p_{k_1 k_2}(x_1)$ and we define
$$X(t) = \Pi_{k_2}(t - t_1, x_1) \quad\text{for } t_1 < t \le t_2, \qquad x_2 = X(t_2),$$
and so on.
In many applications we are mostly interested in the values of the solution $X(t)$ at the "switching" points $t_n$. Thus we will consider the sequence $x_n = X(t_n)$ for $n = 0, 1, \dots$ Denoting by $\mu_n$, $n = 0, 1, \dots,$ the distribution of $x_n$, i.e.
$$\mu_n(A) = \operatorname{prob}(x_n \in A), \quad A \in \mathcal{B}(Y),\ n = 0, 1, \dots,$$
1991 Mathematics Subject Classification: Primary 47A35; Secondary 58F30.
Key words and phrases: dynamical systems, Markov operator, asymptotic stability.
This research was supported by the State Committee for Scientific Research (Poland) grant no. 2P03A04209.
we will give conditions that ensure the weak convergence of $\{\mu_n\}$. Furthermore, we will show that the stochastic process $X(t)$ generates a semigroup $\{P_t\}_{t\ge0}$ of Markov operators which has an invariant measure.
In this paper an important role is played by the transition operator $P$ given by the relation $\bar\mu_{n+1} = P\bar\mu_n$, where
$$\bar\mu_n(A \times \{s\}) = \operatorname{prob}(x_n \in A \text{ and } x_n = \Pi_s(\Delta t_n, x_{n-1})), \quad n = 1, 2, \dots$$
First, from the asymptotic stability of $P$ follows the weak convergence of $\{\mu_n\}$. Second, we reduce the problem of the existence of an invariant measure for the semigroup $\{P_t\}_{t\ge0}$ to the problem of the existence of an invariant measure for $P$.
The organization of the paper is as follows. Section 1 contains some notation and definitions from the theory of Markov operators. In Section 2 we specify the problem to be considered. The relationship between the transition operator and the semigroup generated by the process X(t) is formulated in Section 3. In Section 4 we give sufficient conditions for asymptotic stability of the transition operator P. Section 5 contains the proofs.
1. Preliminaries. Let (Y, ̺) be a metric space. Throughout this paper we assume that Y is locally compact (bounded closed subsets are compact).
We denote by $\mathcal{B}(Y)$ the $\sigma$-algebra of Borel subsets of $Y$ and by $M(Y)$ the family of all finite Borel measures (nonnegative, $\sigma$-additive) on $Y$. We denote by $M_1(Y)$ the subset of $M(Y)$ consisting of all $\mu \in M(Y)$ with $\mu(Y) = 1$.
The elements of $M_1(Y)$ will be called distributions. Further,
$$M_{\mathrm{sig}}(Y) = \{\mu_1 - \mu_2 : \mu_1, \mu_2 \in M(Y)\}$$
is the space of finite signed measures.
As usual, $B(Y)$ denotes the space of all bounded Borel measurable functions $f : Y \to \mathbb{R}$, and $C(Y)$ the subspace of all bounded continuous functions with the supremum norm $\|\cdot\|_C$. We denote by $C_0(Y)$ the subspace of $C(Y)$ consisting of functions with compact support.

For $f \in B(Y)$ and $\mu \in M_{\mathrm{sig}}(Y)$ we write
$$\langle f, \mu \rangle = \int_Y f(x)\,\mu(dx).$$
We say that a sequence $\{\mu_n\}$, $\mu_n \in M_1(Y)$, converges weakly to a measure $\mu \in M_1(Y)$ if
$$\lim_{n\to\infty} \langle f, \mu_n \rangle = \langle f, \mu \rangle \quad\text{for } f \in C(Y).$$
In the space $M_{\mathrm{sig}}(Y)$ we introduce the Fortet–Mourier norm [1], [8] by setting
$$\|\mu\|_F = \sup\{\langle f, \mu \rangle : f \in F\}$$
where
$$F = \{f \in C(Y) : |f(x)| \le 1 \text{ and } |f(x) - f(y)| \le \varrho(x, y) \text{ for } x, y \in Y\}.$$
The space $M_1(Y)$ with the metric $\|\mu_1 - \mu_2\|_F$ is a complete metric space, and convergence in this metric coincides with weak convergence.
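For two Dirac measures on the real line the supremum in the Fortet–Mourier norm can be computed in closed form: $\langle f, \delta_x - \delta_y \rangle = f(x) - f(y) \le \min(2, |x-y|)$, with equality attained by the clipped linear function $f(t) = \max(-1, \min(1, \pm(t - (x+y)/2)))$. A quick numerical check of this closed form (the example points are our own illustration, not taken from the paper):

```python
def fm_dirac_distance(x, y):
    """Fortet-Mourier distance between Dirac measures delta_x, delta_y on R:
    sup of f(x) - f(y) over |f| <= 1, Lip(f) <= 1 equals min(2, |x - y|)."""
    return min(2.0, abs(x - y))

def optimal_f(x, y):
    """A bounded 1-Lipschitz function attaining the supremum for delta_x - delta_y."""
    mid = (x + y) / 2.0
    sign = 1.0 if x >= y else -1.0
    return lambda t: max(-1.0, min(1.0, sign * (t - mid)))

# The optimal f attains the closed-form value in each case.
for x, y in [(0.0, 0.5), (0.0, 5.0), (-1.0, 1.0)]:
    f = optimal_f(x, y)
    assert abs((f(x) - f(y)) - fm_dirac_distance(x, y)) < 1e-12
```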
A linear mapping $P : M_{\mathrm{sig}}(Y) \to M_{\mathrm{sig}}(Y)$ is called a Markov operator if $P(M_1(Y)) \subset M_1(Y)$. Thus, for every distribution $\mu$ the measure $P\mu$ is also a distribution.
A measure $\mu_* \in M(Y)$ is called invariant or stationary with respect to a Markov operator $P$ if $P\mu_* = \mu_*$. A stationary probability measure is called a stationary distribution.
A Markov operator $P$ is called a Feller operator if there is an operator $U : B(Y) \to B(Y)$ (dual to $P$) such that

(1) $\langle Uf, \mu \rangle = \langle f, P\mu \rangle$ for $f \in B(Y)$, $\mu \in M_{\mathrm{sig}}(Y)$,

and

(2) $Uf \in C(Y)$ for $f \in C(Y)$.

Setting $\mu = \delta_x$ in (1) we obtain

(3) $Uf(x) = \langle f, P\delta_x \rangle$ for $f \in B(Y)$, $x \in Y$,

where $\delta_x \in M_1(Y)$ is the point (Dirac) measure supported at $x$.

From (3) it follows immediately that $U$ is a linear operator satisfying

(4) $Uf \ge 0$ for $f \in B(Y)$, $f \ge 0$,

(5) $U 1_Y = 1_Y$.
Further, applying the Lebesgue monotone convergence theorem to the integral $\langle f, P\delta_x \rangle$, we obtain the following implication:

(6) if $f_n \in B(Y)$, $f_{n+1} \le f_n$ and $\lim_{n\to\infty} f_n(x) = 0$, then $\lim_{n\to\infty} U f_n(x) = 0$.
Conditions (4)–(6) are quite important. They allow reversing the roles of $P$ and $U$. Namely, assume that a linear operator $U : B(Y) \to B(Y)$ satisfies (4)–(6). Then we may define an operator $P : M_{\mathrm{sig}}(Y) \to M_{\mathrm{sig}}(Y)$ by setting

(7) $P\mu(A) = \langle U 1_A, \mu \rangle$ for $\mu \in M_{\mathrm{sig}}(Y)$, $A \in \mathcal{B}(Y)$.

It is easy to show that $P$ satisfies (1). Moreover, if $U$ satisfies (2) then $P$ is a Markov operator.
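On a finite state space a Markov operator and its dual are simply one stochastic matrix acting on measures (as rows) and on functions (as columns), and identity (1) becomes a matrix identity. A minimal sketch; the 3-state matrix below is an arbitrary example of ours:

```python
# Finite-state illustration of the duality <Uf, mu> = <f, P mu>:
# P acts on measures (row vectors), U acts on functions (column vectors).
K = [[0.5, 0.3, 0.2],   # K[i][j] = probability of moving from state i to j
     [0.1, 0.6, 0.3],
     [0.4, 0.4, 0.2]]

def P(mu):              # (P mu)(j) = sum_i mu(i) K[i][j]
    return [sum(mu[i] * K[i][j] for i in range(3)) for j in range(3)]

def U(f):               # (U f)(i) = sum_j K[i][j] f(j)  (= <f, P delta_i>, cf. (3))
    return [sum(K[i][j] * f[j] for j in range(3)) for i in range(3)]

def pair(f, mu):        # <f, mu>
    return sum(f[i] * mu[i] for i in range(3))

mu = [0.2, 0.5, 0.3]
f = [1.0, -2.0, 0.5]
assert abs(pair(U(f), mu) - pair(f, P(mu))) < 1e-12          # identity (1)
assert abs(sum(P(mu)) - 1.0) < 1e-12                         # P preserves distributions
assert all(abs(u - 1.0) < 1e-12 for u in U([1.0, 1.0, 1.0])) # U 1_Y = 1_Y, i.e. (5)
```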
A family $\{P_t\}_{t\ge0}$ of Markov operators is called a semigroup if $P_{t+s} = P_t \circ P_s$ for $t, s \in \mathbb{R}_+$ and $P_0 = I$ is the identity operator on $M_1(Y)$. If all the $P_t$, $t \ge 0$, are Feller operators, we say that $\{P_t\}_{t\ge0}$ is a Feller semigroup. $\{T_t\}_{t\ge0}$ denotes the semigroup dual to $\{P_t\}_{t\ge0}$, i.e.
$$\langle T_t f, \mu \rangle = \langle f, P_t \mu \rangle \quad\text{for } f \in C(Y),\ \mu \in M_1(Y).$$
2. Formulation of the problem. In this section we consider the action of randomly chosen semidynamical systems.
Suppose we are given a sequence of semidynamical systems $\Pi_k : \mathbb{R}_+ \times Y \to Y$, $k = 1, \dots, N$. More precisely, for every $k = 1, \dots, N$, the mapping $\Pi_k$ satisfies the following conditions:

(i) $\Pi_k(0, x) = x$ for $x \in Y$,
(ii) $\Pi_k(t, \Pi_k(s, x)) = \Pi_k(t + s, x)$ for $x \in Y$, $t, s \in \mathbb{R}_+$, and
(iii) the mapping $(t, x) \mapsto \Pi_k(t, x)$ from $\mathbb{R}_+ \times Y$ into $Y$ is continuous.
Moreover, suppose we are given a probability vector $(p_1(x), \dots, p_N(x))$,
$$p_i(x) \ge 0, \quad \sum_{i=1}^N p_i(x) = 1 \quad\text{for } x \in Y,$$
and a probability matrix $[p_{ij}(x)]_{i,j=1}^N$ such that

(8) $p_{ij}(x) \ge 0$, $\sum_{j=1}^N p_{ij}(x) = 1$ for $x \in Y$ and $i, j = 1, \dots, N$.
Let $\{t_n\}$, $n = 1, 2, \dots,$ be a sequence of random variables such that the increments

(9) $\Delta t_n = t_n - t_{n-1}$ $(t_0 = 0)$

are independent and have the same density distribution function $g(t) = a e^{-at}$.
We choose an initial point $x_0 \in Y$. Next we randomly select an integer from $\{1, \dots, N\}$ in such a way that the probability of choosing $k_1$ is $p_{k_1}(x_0)$. When $k_1$ is drawn we define
$$X(t) = \Pi_{k_1}(t, x_0) \quad\text{for } 0 \le t \le t_1, \qquad x_1 = X(t_1).$$
Having $x_1$ we select $k_2$ with probability $p_{k_1 k_2}(x_1)$ and we define
$$X(t) = \Pi_{k_2}(t - t_1, x_1) \quad\text{for } t_1 < t \le t_2, \qquad x_2 = X(t_2),$$
and so on.
Thus,
$$X(t) = \Pi_s(t - t_{n-1}, x_{n-1}) \quad\text{for } t \in (t_{n-1}, t_n], \qquad x_n = X(t_n),$$
with probability $p_{ks}(x_{n-1})$ if $x_{n-1} = \Pi_k(t_{n-1} - t_{n-2}, x_{n-2})$.
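The recurrence above is straightforward to simulate. The sketch below uses $N = 2$ contracting flows on the real line with constant switching probabilities; all concrete choices ($\Pi_1(t,x) = xe^{-t}$, $\Pi_2(t,x) = 1 + (x-1)e^{-t}$, $p_{ks} \equiv 1/2$, $a = 1$) are illustrative assumptions of ours, not examples from the paper. For this pair $E\,x_{n+1} = 1/4 + E\,x_n/2$, so the mean of $x_n$ settles near $1/2$:

```python
import math
import random

random.seed(0)

# Two illustrative semidynamical systems on R: the flows of x' = -x and x' = 1 - x.
flows = [lambda t, x: x * math.exp(-t),
         lambda t, x: 1.0 + (x - 1.0) * math.exp(-t)]

def switching_points(x0, n_steps, a=1.0):
    """Sample x_1, ..., x_n of the process at the switching points t_n:
    exponential(a) increments, next system chosen with p_ks(x) = 1/2."""
    x, traj = x0, []
    for _ in range(n_steps):
        dt = random.expovariate(a)    # Delta t_n with density a e^{-a t}
        k = random.randrange(2)       # constant switching probabilities 1/2
        x = flows[k](dt, x)
        traj.append(x)
    return traj

# The empirical distribution of x_n stabilizes; its mean tends to 1/2
# because E e^{-Delta t} = 1/2 when a = 1.
samples = [switching_points(5.0, 50)[-1] for _ in range(4000)]
mean = sum(samples) / len(samples)
assert abs(mean - 0.5) < 0.05
```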
In many applications we are mostly interested in the values of the solution $X(t)$ at the "switching" points $t_n$. Thus we will consider the sequence $x_n = X(t_n)$ for $n = 0, 1, \dots$ Denote by $\mu_n$, $n = 0, 1, \dots,$ the distribution of $x_n$, i.e.

(10) $\mu_n(A) = \operatorname{prob}(x_n \in A)$, $A \in \mathcal{B}(Y)$, $n = 0, 1, \dots$
We consider the asymptotic behaviour of the sequence $\{\mu_n\}$. In particular, we give conditions that ensure the weak convergence of $\{\mu_n\}$ to a unique measure $\mu_*$.
Furthermore, we study the semigroup $\{P_t\}_{t\ge0}$ of Markov operators on $M_1(Y \times \{1, \dots, N\})$ generated by the solution $X(t)$ with initial condition $X(0) = x$. We show that the semigroup $\{P_t\}_{t\ge0}$ has an invariant measure.
To formulate our criterion for the weak convergence of $\{\mu_n\}$ we introduce the following notation. Let $\Phi$ be the class of functions $\varphi : \mathbb{R}_+ \to \mathbb{R}_+$ satisfying the following conditions (see [8]):

(I) $\varphi$ is continuous and $\varphi(0) = 0$;
(II) $\varphi$ is nondecreasing and concave, i.e.
$$\lambda\varphi(z_1) + (1-\lambda)\varphi(z_2) \le \varphi(\lambda z_1 + (1-\lambda) z_2) \quad\text{for } z_1, z_2 \in \mathbb{R}_+,\ 0 \le \lambda \le 1;$$
(III) $\varphi(x) > 0$ for $x > 0$ and $\lim_{x\to\infty} \varphi(x) = \infty$.
The family of functions satisfying (I) and (II) will be denoted by $\Phi_0$. An important role in the study of the convergence of $\mu_n$ is played by the inequality

(11) $\omega(t) + \varphi(r(t)) \le \varphi(t)$ for $t \ge 0$.

Lasota and Yorke [8] discuss precisely the cases in which the functional inequality (11) has solutions belonging to $\Phi$.
Case I: Dini condition. Assume that $\omega \in \Phi_0$ satisfies the Dini condition, i.e.
$$\int_0^\varepsilon \frac{\omega(t)}{t}\,dt < \infty \quad\text{for some } \varepsilon > 0,$$
and $r(t) = ct$, $0 \le c < 1$.
Case II: Hölder condition. Assume that $\omega \in \Phi_0$ and $\omega(t) \le \alpha t^\beta$, where $\alpha, \beta > 0$ are constants. Furthermore, assume that $r \in \Phi_0$, $r(t) < t$ and
$$0 \le r(t) \le t - b t^{\alpha+1} \quad\text{for } 0 \le t \le \varepsilon,$$
where $\alpha, b, \varepsilon > 0$ are constants.
Case III: Lipschitz condition. Assume that $\omega \in \Phi_0$ and $\omega(t) < \alpha t$, where $\alpha > 0$ is a constant, and that $r \in \Phi_0$ satisfies $0 \le r(t) < t$ for $t > 0$ and
$$\int_0^\varepsilon \frac{t\,dt}{t - r(t)} < \infty \quad\text{for some } \varepsilon > 0.$$
If the functions ω and r satisfy the conditions formulated in one of cases I–III, then every solution of inequality (11) belongs to Φ.
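In Case I inequality (11) can be solved explicitly for power-type $\omega$: if $\omega(t) = t^\beta$ and $r(t) = ct$ with $0 \le c < 1$, then $\varphi(t) = t^\beta/(1 - c^\beta)$ satisfies $\omega(t) + \varphi(r(t)) = \varphi(t)$ with equality, and $\varphi \in \Phi$ for $0 < \beta \le 1$. A numerical check of this identity (the particular $\omega$, $r$, $\varphi$ are our worked example, not taken from [8]):

```python
# Worked example for inequality (11): omega(t) + phi(r(t)) <= phi(t)
# with omega(t) = t**beta (Dini condition holds for beta > 0),
# r(t) = c*t, and phi(t) = t**beta / (1 - c**beta).
beta, c = 0.5, 0.25

def omega(t): return t ** beta
def r(t): return c * t
def phi(t): return t ** beta / (1.0 - c ** beta)

for i in range(1, 200):
    t = 0.05 * i
    lhs = omega(t) + phi(r(t))
    assert lhs <= phi(t) + 1e-9          # (11) holds ...
    assert abs(lhs - phi(t)) < 1e-9      # ... here with equality
```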
Now we assume that the sequence of semidynamical systems $\Pi_k : \mathbb{R}_+ \times Y \to Y$ and the transition probabilities $p_{ks} : Y \to [0,1]$ satisfy, for all $x, y \in Y$ and $i = 1, \dots, N$,

(12) $\sum_{k=1}^N |p_{ik}(x) - p_{ik}(y)| \le \omega_i(\varrho(x, y))$,

(13) $\sum_{k=1}^N p_{ik}(y)\,\varrho(\Pi_k(t, x), \Pi_k(t, y)) \le L e^{\lambda t} r_i(\varrho(x, y))$,

where the functions $\omega_i : \mathbb{R}_+ \to \mathbb{R}_+$ and $r_i : \mathbb{R}_+ \to \mathbb{R}_+$, $i = 1, \dots, N$, satisfy the assumptions of one of Cases I–III.
Assume, moreover, that there is a point $x_* \in Y$ such that

(14) $\sup\{\varrho(\Pi_k(t, x_*), x_*) : t \ge 0\} < \infty$ for $k = 1, \dots, N$.
Denote by $p$ the matrix $[p_{ij}]$. For simplicity we will say that the system
$$(\Pi, p)_N = (\Pi_1, \dots, \Pi_N; [p_{ij}]_{i,j=1}^N)$$
satisfies conditions (12)–(14) if the semidynamical systems $\Pi_k$ and the probabilities $p_{ks}$ satisfy the corresponding conditions.
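Conditions (12)–(14) are easy to exhibit concretely. For the two flows $\Pi_1(t,x) = xe^{-t}$, $\Pi_2(t,x) = 1 + (x-1)e^{-t}$ on $\mathbb{R}$ with constant $p_{ij} \equiv 1/2$ (our illustrative choice, not an example from the paper), (12) holds trivially with $\omega_i = 0$, (13) holds with equality for $L = 1$, $\lambda = -1$, $r_i(t) = t$, and (14) holds with $x_* = 0$ since the orbit of $0$ stays in $[0, 1)$. A grid check:

```python
import math

# Illustrative system: flows of x' = -x and x' = 1 - x, p_ij = 1/2.
flows = [lambda t, x: x * math.exp(-t),
         lambda t, x: 1.0 + (x - 1.0) * math.exp(-t)]
L, lam = 1.0, -1.0
x_star = 0.0

for t in [0.0, 0.3, 1.0, 5.0]:
    # (13): sum_k p_ik |Pi_k(t,x) - Pi_k(t,y)| <= L e^{lam t} |x - y|
    for x in [-2.0, 0.0, 1.5]:
        for y in [-1.0, 0.7, 3.0]:
            lhs = 0.5 * sum(abs(f(t, x) - f(t, y)) for f in flows)
            assert lhs <= L * math.exp(lam * t) * abs(x - y) + 1e-12
    # (14): the orbit of x_* = 0 under each flow stays within distance 1
    for f in flows:
        assert abs(f(t, x_star) - x_star) <= 1.0 + 1e-12
```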
We now formulate the main result of this section.
Theorem 1. Assume that the system $(\Pi, p)$ satisfies (12)–(14). Assume, moreover, that the constants $a > 0$, $L > 0$ and $\lambda \in \mathbb{R}$ satisfy

(15) $L + \lambda/a < 1$.

If in addition $\inf\{p_{ij}(x) : i, j \in \{1, \dots, N\},\ x \in Y\} > 0$, then there exists a distribution $\mu_*$ such that the sequence $\{\mu_n\}$ defined by (10) is weakly convergent to $\mu_*$.
The proof will be given in Section 5.
3. The transition operator $P$ and the semigroup $\{P_t\}$ generated by $X(t)$. In this section we describe the evolution of the measures $\bar\mu_n$ under a Markov operator and we determine the semigroup $\{P_t\}_{t\ge0}$ corresponding to the process $X(t)$.

Let $\bar Y = Y \times \{1, \dots, N\}$ be equipped with the metric $\bar\varrho$ given by
$$\bar\varrho((x, i), (y, j)) = \varrho(x, y) + \varrho_0(i, j) \quad\text{for } x, y \in Y,\ i, j \in \{1, \dots, N\},$$
where $\varrho_0$ is a metric in $\{1, \dots, N\}$.
We define a new sequence of semidynamical systems $\bar\Pi_k : \mathbb{R}_+ \times \bar Y \to \bar Y$ for $k = 1, \dots, N$ by
$$\bar\Pi_k(t, (x, s)) = (\Pi_k(t, x), k).$$
Now, for an initial point $x_0$ we randomly select an integer $k$ with probability $p_k(x_0)$ and we define $x_1 = \Pi_k(t_1, x_0)$. Next, for the pair $(x_1, k)$ we randomly select an integer $s \in \{1, \dots, N\}$ with probability $p_{ks}(x_1)$ and we define
$$(x_2, s) = \bar\Pi_s(t_2 - t_1, (x_1, k)),$$
and so on. Hence
$$(x_n, s) = \bar\Pi_s(\Delta t_n, (x_{n-1}, k)), \quad n = 2, 3, \dots,$$
with probability $p_{ks}(x_{n-1})$.
The evolution of the distributions $\bar\mu_n$ on $\bar Y$ defined by
$$\bar\mu_n(A \times \{s\}) = \operatorname{prob}(x_n \in A \text{ and } x_n = \Pi_s(\Delta t_n, x_{n-1})), \quad n = 1, 2, \dots,$$
can be described by a Feller operator $P$ such that
$$\bar\mu_{n+1} = P\bar\mu_n.$$
It is called the transition operator for this system. To find an explicit form of $P$, we look for the dual operator $U$. A straightforward calculation shows that

(16) $\displaystyle Uf(x, k) = \sum_{s=1}^N \int_0^\infty f(\bar\Pi_s(t, (x, k)))\, a e^{-at}\, p_{ks}(x)\,dt = \sum_{s=1}^N \int_0^\infty f(\Pi_s(t, x), s)\, a e^{-at}\, p_{ks}(x)\,dt$ for $f \in B(\bar Y)$.
Thus (see [4]), we may find $P$ from
$$P\mu(A) = \langle 1_A, P\mu \rangle = \langle U 1_A, \mu \rangle.$$
This gives

(17) $\displaystyle P\mu(A) = \sum_{s=1}^N \int_{\bar Y} \int_0^\infty 1_A(\bar\Pi_s(t, (x, k)))\, a e^{-at}\,dt\; p_{ks}(x)\,d\mu(x, k)$

for $\mu \in M(\bar Y)$ and $A \in \mathcal{B}(\bar Y)$.
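Formula (16) is an explicit one-step expectation and can be checked numerically. With a single flow $\Pi_1(t,x) = xe^{-t}$, $f(y,s) = y$ and $p_{11} = 1$ (a toy setup of ours), (16) reduces to $Uf(x,1) = \int_0^\infty xe^{-t}\, ae^{-at}\,dt = xa/(a+1)$. A sketch comparing quadrature against this closed form:

```python
import math

def U_f(x, a, n=200000, T=40.0):
    """Numerically evaluate (16) for the toy case Pi_1(t,x) = x e^{-t},
    f(y,s) = y, p_11 = 1, i.e. Uf(x) = int_0^inf x e^{-t} a e^{-a t} dt,
    using the midpoint rule on [0, T] (the tail beyond T is negligible)."""
    h = T / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        total += x * math.exp(-t) * a * math.exp(-a * t) * h
    return total

a, x = 2.0, 3.0
closed_form = x * a / (a + 1.0)    # = 2.0 for these values
assert abs(U_f(x, a) - closed_form) < 1e-3
```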
Now we turn to the continuous time case. A probabilistic interpretation of the system is as follows. Let $(\Omega, \Sigma, \operatorname{prob})$ be a probability space. Further, let $\{t_n\}$, $n = 0, 1, \dots,$ be the sequence of random variables defined by (9). We consider a stochastic process $X(t) : \Omega \to Y$ and a stochastic process $\xi(t) : \Omega \to \{1, \dots, N\}$, and we assume that they are related by
$$\xi(0) = k, \quad X(0) = x, \quad \xi(t) = \xi(t_n) \ \text{ for } t_n \le t < t_{n+1},$$
$$\operatorname{prob}\{\xi(t_n) = s \mid X(t_n) = y \text{ and } X(t_n) = \Pi_k(\Delta t_n, X(t_{n-1}))\} = p_{ks}(y)$$
and
$$X(t) = \Pi_{\xi(t_{n-1})}(t - t_{n-1}, X(t_{n-1})) \quad\text{for } t_{n-1} < t \le t_n.$$
The pair $(X(t), \xi(t))$ is a stochastic process on $\bar Y$. The process $(X(t), \xi(t))$ generates a semigroup $\{T_t\}_{t\ge0}$ defined by

(18) $T_t f(x, k) = \mathbb{E}(f(X(t), \xi(t)))$ for $f \in C(\bar Y)$,

where $\mathbb{E}(f(X(t), \xi(t)))$ denotes the expectation of $f(X(t), \xi(t))$. It is well known that $\{T_t\}_{t\ge0}$ is a semigroup of operators from $C(\bar Y)$ into itself and that for every $t \ge 0$ the operator $T_t$ is a contraction, i.e. $\|T_t f\|_C \le \|f\|_C$.
Now we define semigroup operators $P_t : M_1(\bar Y) \to M_1(\bar Y)$ by

(19) $\langle P_t\mu, f \rangle = \langle \mu, T_t f \rangle$ for $f \in C(\bar Y)$ and $\mu \in M_1(\bar Y)$.

Setting
$$G(t, (x, k), A) = \operatorname{prob}\{(X(t), \xi(t)) \in A\}$$
we obtain
$$P_t\mu(A) = \int_{\bar Y} G(t, (x, k), A)\,d\mu(x, k) \quad\text{for } \mu \in M_1(\bar Y) \text{ and } A \in \mathcal{B}(\bar Y).$$
Moreover, define
$$\eta(t) = \sum_{t_n < t} \chi(t - t_n),$$
where $\chi$ is the Heaviside function
$$\chi(t) = \begin{cases} 0 & \text{for } t < 0, \\ 1 & \text{for } t \ge 0. \end{cases}$$
Evidently $\eta(t)$ is a Poisson process and there is a constant $K_1$ such that
$$\operatorname{prob}\{\eta(h) \le 1\} \ge 1 - K_1 h^2.$$
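The quadratic bound on two or more switchings in a short interval can be made explicit: for a Poisson process of intensity $a$, $\operatorname{prob}\{\eta(h) \ge 2\} = 1 - e^{-ah} - ah\,e^{-ah} \le (ah)^2/2$, so $K_1 = a^2/2$ works. A check of this inequality (these are standard Poisson facts, used here as a sanity check rather than a statement from the paper):

```python
import math

def prob_at_most_one(h, a):
    """P(eta(h) <= 1) for a Poisson process of intensity a:
    P(0 events) + P(1 event) = e^{-ah} (1 + ah)."""
    return math.exp(-a * h) * (1.0 + a * h)

a = 1.5
K1 = a * a / 2.0          # a valid choice of K_1
for h in [0.001, 0.01, 0.1, 0.5]:
    assert prob_at_most_one(h, a) >= 1.0 - K1 * h * h
```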
For the first switching point $t_1$ of the process $\eta(t)$ we have
$$\operatorname{prob}\{(X(h), \xi(h)) = (\Pi_{\xi(t_1)}(h - t_1, \Pi_k(t_1, x)), \xi(t_1))\,1_{[0,h]}(t_1) + (\Pi_k(h, x), k)\,1_{(h,\infty)}(t_1)\} \ge 1 - K_1 h^2.$$
Since $f \in C(\bar Y)$ is bounded and $t_1$ has density distribution function $a e^{-at}$, we obtain

(20) $\displaystyle T_h f(x, k) = \int_0^h \sum_{s=1}^N f(\Pi_s(h - t, \Pi_k(t, x)), s)\, p_{ks}(\Pi_k(t, x))\, a e^{-at}\,dt + f(\Pi_k(h, x), k)\, e^{-ah} + \varepsilon_1(h),$

where $|\varepsilon_1(h)| \le \|f\|_C K_1 h^2$.
The following theorem gives a relation between the existence of an invariant measure for the operator $P$ and the existence of an invariant measure for the semigroup $\{P_t\}$ in the case where $Y$ is a Banach space with norm $\|\cdot\|$.

Theorem 2. Let $P : M_1(\bar Y) \to M_1(\bar Y)$ be the operator given by (17) and let $\{P_t\}_{t\ge0}$ be the semigroup of operators given by (19). Assume that for every $k \in \{1, \dots, N\}$ the derivative with respect to $t$ of the mapping $\Pi_k : \mathbb{R}_+ \times Y \to Y$ is continuous. Assume, moreover, that there are numbers $\delta, \gamma > 0$ such that

(21) $\|\Pi_s(t, x) - \Pi_s(0, x)\| \le \gamma t$ for $t < \delta$, $x \in Y$ and $s \in \{1, \dots, N\}$.

If $\mu_* \in M_1(\bar Y)$ is an invariant measure with respect to the operator $P$, then $\mu_*$ is also $P_t$-invariant, i.e. $P_t\mu_* = \mu_*$ for every $t \ge 0$.
The proof will be given in Section 5.
4. Nonexpansiveness and asymptotic stability of P. In this section we study the asymptotic behaviour of the transition operator $P$. The reasons for this study are twofold. First, from the asymptotic stability of $P$ it follows that the sequence $\{\mu_n\}$ of distributions is weakly convergent. Second, Theorem 2 reduces the problem of constructing an invariant measure for the semigroup $\{P_t\}_{t\ge0}$ to that of constructing an invariant measure for $P$.
A Markov operator is called asymptotically stable if there exists a distribution $\mu_*$ such that $P\mu_* = \mu_*$ and

(22) $\lim_{n\to\infty} \|P^n\mu - \mu_*\|_F = 0$ for $\mu \in M_1(Y)$.
Our first step in the study of $P$ is to show that it is nonexpansive. Recall that a Markov operator $P$ is called nonexpansive if
$$\|P\mu_1 - P\mu_2\|_F \le \|\mu_1 - \mu_2\|_F \quad\text{for } \mu_1, \mu_2 \in M_1(Y).$$
We now change the metric $\varrho$ in such a way that the properties of continuity, boundedness and compactness remain the same, but the value of $\|P\mu_1 - P\mu_2\|_F$ for $\mu_1, \mu_2 \in M(Y)$ can be estimated more precisely.
For every $i \in \{1, \dots, N\}$ let $\varphi_i$ be a solution of the inequality

(23) $\omega_i(t) + \varphi_i(r_i(t)) \le \varphi_i(t)$ for $t \ge 0$,

where the functions $\omega_i$ and $r_i$ are given by (12), (13). Define $\varphi : \mathbb{R}_+ \to \mathbb{R}_+$ by
$$\varphi(t) = \sum_{i=1}^N \varphi_i(t) \quad\text{for } t \ge 0,$$
and the metric $\bar\varrho_\varphi$ on $\bar Y$ by

(24) $\bar\varrho_\varphi((x, i), (y, j)) = \varphi(\bar\varrho((x, i), (y, j))) = \varphi(\varrho(x, y) + \varrho_0(i, j))$ for $x, y \in Y$ and $i, j \in \{1, \dots, N\}$.

We will assume that the metric $\varrho_0$ has the form
$$\varrho_0(i, j) = \begin{cases} c & \text{for } i \ne j, \\ 0 & \text{for } i = j, \end{cases}$$
where $c$ is such that $\varphi(c) \ge 2$.
We may now formulate the following theorem.

Theorem 3. Assume that the system $(\Pi, p)$ satisfies the inequalities (12) and (13). If the constants $a > 0$, $L > 0$ and $\lambda \in \mathbb{R}$ satisfy

(25) $L + \lambda/a \le 1$,

then the Markov operator $P$ given by (17) is nonexpansive with respect to the metric $\bar\varrho_\varphi$.
The proof will be given in Section 5.
We also show that, under additional assumptions, P given by (17) is asymptotically stable.
Theorem 4. Assume that the system $(\Pi, p)$ satisfies conditions (12), (13) and (15). Assume, moreover, that there is a point $x_* \in Y$ for which condition (14) is satisfied. If in addition $\inf\{p_{ij}(x) : i, j \in \{1, \dots, N\},\ x \in Y\} > 0$, then the operator $P$ given by (17) is asymptotically stable.
The proof will be given in Section 5.
5. Proofs. We adopt all the notations of the previous sections. The proof of Theorem 1 is based on Theorem 4. Thus we start with the proofs of Theorems 3 and 4.
Proof of Theorem 3. Denote by $\|\cdot\|_\varphi$ the Fortet–Mourier norm in $M_1(\bar Y)$ given by $\|\mu\|_\varphi = \sup\{|\langle f, \mu \rangle| : f \in F_\varphi\}$, where $F_\varphi$ is the set of functions such that $|f| \le 1$ and

(26) $|f(x, i) - f(y, j)| \le \bar\varrho_\varphi((x, i), (y, j)) = \varphi(\bar\varrho((x, i), (y, j)))$ for $x, y \in Y$ and $i, j \in \{1, \dots, N\}$.
The operator $P$ is nonexpansive with respect to the metric $\bar\varrho_\varphi$ if

(27) $\|P\mu_1 - P\mu_2\|_\varphi \le \|\mu_1 - \mu_2\|_\varphi$ for $\mu_1, \mu_2 \in M_1(\bar Y)$.

In order to verify (27) we use the adjoint operator. We have
$$\|P\mu_1 - P\mu_2\|_\varphi = \sup\{|\langle f, P\mu_1 - P\mu_2 \rangle| : f \in F_\varphi\} = \sup\{|\langle Uf, \mu_1 - \mu_2 \rangle| : f \in F_\varphi\}.$$
To prove the nonexpansiveness it is sufficient to show that

(28) $U(F_\varphi) \subset F_\varphi$.
Fix $f \in F_\varphi$. Evidently $|Uf| \le 1$, so we have to prove that

(29) $|Uf(x, i) - Uf(y, j)| \le \bar\varrho_\varphi((x, i), (y, j))$ for $x, y \in Y$ and $i, j \in \{1, \dots, N\}$.

Since by assumption $\varrho_0(i, j) = c$ for $i \ne j$ and $\varphi(c) \ge 2$, condition (29) is satisfied for $i \ne j$. For $i = j$ we have
$$|Uf(x, i) - Uf(y, i)| \le \sum_{k=1}^N \int_0^\infty |f(\Pi_k(t, x), k)|\, a e^{-at}\, |p_{ik}(x) - p_{ik}(y)|\,dt + \sum_{k=1}^N \int_0^\infty |f(\Pi_k(t, x), k) - f(\Pi_k(t, y), k)|\, p_{ik}(y)\, a e^{-at}\,dt.$$
Since $f \in F_\varphi$, we obtain
$$|Uf(x, i) - Uf(y, i)| \le \sum_{k=1}^N |p_{ik}(x) - p_{ik}(y)| + \sum_{k=1}^N \int_0^\infty \sum_{l=1}^N \varphi_l(\varrho(\Pi_k(t, x), \Pi_k(t, y)))\, p_{ik}(y)\, a e^{-at}\,dt$$
$$= \sum_{k=1}^N |p_{ik}(x) - p_{ik}(y)| + \sum_{l=1}^N \int_0^\infty \Big( \sum_{k=1}^N p_{ik}(y)\, \varphi_l(\varrho(\Pi_k(t, x), \Pi_k(t, y))) \Big)\, a e^{-at}\,dt.$$
Now, for every $l \in \{1, \dots, N\}$ the functions $\varphi_l$ are concave and $\sum_{k=1}^N p_{ik}(y) = 1$; thus, using the Jensen inequality, we obtain
$$|Uf(x, i) - Uf(y, i)| \le \sum_{k=1}^N |p_{ik}(x) - p_{ik}(y)| + \sum_{l=1}^N \int_0^\infty \varphi_l\Big( \sum_{k=1}^N p_{ik}(y)\, \varrho(\Pi_k(t, x), \Pi_k(t, y)) \Big)\, a e^{-at}\,dt$$
$$\le \sum_{k=1}^N |p_{ik}(x) - p_{ik}(y)| + \sum_{l=1}^N \varphi_l\Big( \int_0^\infty \sum_{k=1}^N p_{ik}(y)\, \varrho(\Pi_k(t, x), \Pi_k(t, y))\, a e^{-at}\,dt \Big).$$
Since the $\varphi_l$ are nondecreasing, from (12) and (13) we obtain
$$|Uf(x, i) - Uf(y, i)| \le \omega_i(\varrho(x, y)) + \sum_{l=1}^N \varphi_l\Big( \int_0^\infty L a e^{-(a-\lambda)t}\, r_i(\varrho(x, y))\,dt \Big).$$
Inequality (25) implies that $a > \lambda$ and $La/(a - \lambda) \le 1$. Thus
$$|Uf(x, i) - Uf(y, i)| \le \omega_i(\varrho(x, y)) + \sum_{l=1}^N \varphi_l(r_i(\varrho(x, y))).$$
From this and inequality (23) it follows that
$$|Uf(x, i) - Uf(y, i)| \le \varphi_i(\varrho(x, y)) + \sum_{\substack{l=1 \\ l \ne i}}^N \varphi_l(r_i(\varrho(x, y))).$$
Since $r_i(t) \le t$ and the $\varphi_l$ are nondecreasing, we obtain
$$|Uf(x, i) - Uf(y, i)| \le \varphi(\varrho(x, y)) = \bar\varrho_\varphi((x, i), (y, i)).$$
Proof of Theorem 4. The proof is based on the lower bound technique for Markov operators developed in [6] and [8]. To apply this method we verify the following three properties of the transition operator $P$:

(i) $P$ is nonexpansive with respect to $\bar\varrho_\varphi$;
(ii) $P$ has the Prokhorov property, that is, for every $\varepsilon > 0$ there is a compact set $F \subset \bar Y$ such that

(30) $\liminf_{n\to\infty} P^n\mu(F) \ge 1 - \varepsilon$ for $\mu \in M_1(\bar Y)$;

(iii) $P$ satisfies a lower bound condition: for every $\varepsilon > 0$ there is a $\beta > 0$ such that for any two measures $\mu_1, \mu_2 \in M_1(\bar Y)$ there exist a Borel measurable set $A$ with $\operatorname{diam}_{\bar\varrho_\varphi} A \le \varepsilon$ and an integer $n_0$ for which $P^{n_0}\mu_k(A) \ge \beta$ for $k = 1, 2$. Here $\operatorname{diam}_{\bar\varrho_\varphi} A = \sup\{\bar\varrho_\varphi(x, y) : x, y \in A\}$.
It was shown in [8] (Thms. 4.1 and 9.1) that if $\bar Y$ is a locally compact metric space, then conditions (i)–(iii) imply the asymptotic stability of $P$. In our case $\bar Y$ may be considered with the metric $\bar\varrho_\varphi$. The nonexpansiveness of the operator $P$ follows from Theorem 3.
To verify (ii) we fix $\varepsilon > 0$ and $x_* \in Y$ for which (14) is satisfied. The system $(\Pi, p)$ satisfies (13) with $r_i \in \Phi_0$ and $0 \le r_i(y) < y$ for $y \in \mathbb{R}_+$; thus

(31) $\displaystyle \sum_{k=1}^N p_{ik}(x)\,\varrho(\Pi_k(t, x), x_*) \le \sum_{k=1}^N p_{ik}(x)\,\varrho(\Pi_k(t, x), \Pi_k(t, x_*)) + \sum_{k=1}^N p_{ik}(x)\,\varrho(\Pi_k(t, x_*), x_*)$
$$\le L e^{\lambda t} r_i(\varrho(x, x_*)) + \sum_{k=1}^N p_{ik}(x)\,\varrho(\Pi_k(t, x_*), x_*) \le L e^{\lambda t} \varrho(x, x_*) + \max_k \varrho(\Pi_k(t, x_*), x_*) \le L e^{\lambda t} \varrho(x, x_*) + C,$$
where
$$C = \max_{1 \le k \le N} \sup_{t \ge 0} \varrho(\Pi_k(t, x_*), x_*).$$
Now define
$$h(x, i) = \varrho(x, x_*) \quad\text{for } x \in Y \text{ and } i \in \{1, \dots, N\},$$
and set $m_n = \langle h, \bar\mu_n \rangle$, $n = 0, 1, \dots$ Consider first the case $m_0 < \infty$. Using the recurrence formula $\bar\mu_{n+1} = P\bar\mu_n$ and expression (16) for the adjoint operator $U$, we have
$$m_{n+1} = \langle h, P\bar\mu_n \rangle = \langle Uh, \bar\mu_n \rangle = \int_{\bar Y} \sum_{s=1}^N \int_0^\infty h(\bar\Pi_s(t, (x, k)))\, a e^{-at}\,dt\; p_{ks}(x)\,d\bar\mu_n(x, k)$$
$$= \int_{\bar Y} \sum_{s=1}^N \int_0^\infty h(\Pi_s(t, x), s)\, a e^{-at}\,dt\; p_{ks}(x)\,d\bar\mu_n(x, k) = \int_{\bar Y} \Big( \sum_{s=1}^N \int_0^\infty \varrho(\Pi_s(t, x), x_*)\, a e^{-at}\,dt\; p_{ks}(x) \Big)\,d\bar\mu_n(x, k).$$
From this and inequality (31) it follows that
$$m_{n+1} \le \int_{\bar Y} \int_0^\infty \varrho(x, x_*)\, L a e^{-(a-\lambda)t}\,dt\,d\bar\mu_n(x, k) + C.$$
Inequality (15) implies that $a > \lambda$ and
$$d = La/(a - \lambda) < 1.$$
Hence $m_{n+1} \le d m_n + C$. By an induction argument this gives
$$m_n \le d^n m_0 + \frac{C}{1 - d} \quad\text{for } n = 1, 2, \dots$$
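The induction step behind this bound is the affine recursion $m_{n+1} \le d m_n + C$ unrolled: $m_n \le d^n m_0 + C(1 + d + \dots + d^{n-1}) \le d^n m_0 + C/(1-d)$. A quick numerical sanity check, not part of the proof; the constants $d = 0.6$, $C = 2$, $m_0 = 100$ are our illustrative values:

```python
# Affine recursion m_{n+1} = d * m_n + C (the worst case of the inequality)
# versus the closed bound m_n <= d**n * m_0 + C / (1 - d), valid for 0 <= d < 1.
d, C, m0 = 0.6, 2.0, 100.0

m = m0
for n in range(1, 60):
    m = d * m + C    # equality case of m_{n+1} <= d m_n + C
    assert m <= d ** n * m0 + C / (1.0 - d) + 1e-9

# The bound is attained in the limit: m_n -> C / (1 - d) = 5 here.
assert abs(m - C / (1.0 - d)) < 1e-9
```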
Since $m_0 < \infty$, there exists an integer $n_0$ such that $m_n \le \Gamma$ for $n \ge n_0$, where $\Gamma = 1 + C/(1 - d)$. By the Chebyshev inequality this implies
$$\bar\mu_n(\bar Y_M) \ge 1 - \Gamma/M \quad\text{for } n \ge n_0,$$
where
$$\bar Y_M = \{(x, k) \in \bar Y : \varrho(x, x_*) \le M\}.$$
Thus, in the case $m_0 < \infty$ the Prokhorov property of $P$ is verified. The general case $m_0 \le \infty$ can be reduced to the previous one as follows. For given $\delta > 0$ we choose a compact set $K \subset \bar Y$ such that $\bar\mu_0(K) \ge 1 - \delta$. Setting
$$\nu_0(A) = \bar\mu_0(A \cap K)/\bar\mu_0(K)$$
we define a probability measure $\nu_0$ supported on $K$ for which the initial moment $m_0 = \langle h, \nu_0 \rangle$ is finite. Thus, according to the first part of the proof of the Prokhorov property of $P$, there is $n_0 = n_0(\delta)$ such that
$$P^n\nu_0(\bar Y_M) \ge 1 - \Gamma/M \quad\text{for } n \ge n_0,\ M > 0.$$
Since $\bar\mu_0(A) \ge \bar\mu_0(A \cap K)$, we have
$$P^n\bar\mu_0(\bar Y_M) \ge \bar\mu_0(K)\, P^n\nu_0(\bar Y_M) \ge (1 - \delta)(1 - \Gamma/M).$$
Choosing $\delta$ sufficiently small and $M$ sufficiently large we obtain
$$P^n\bar\mu_0(\bar Y_M) \ge 1 - \varepsilon \quad\text{for } n \ge n_0.$$
Thus, the operator $P$ given by (17) has the Prokhorov property.
Now we show (iii).
We define the families of functions Π
ktnn...t...k11: Y → Y and Π
tknn...t...k11: Y → Y (t
i∈ R
+, k
i∈ {1, . . . , N }, for i = 1, . . . , n) by the recurrence relations
Π
kt11(x) = Π
k1(t
1, x), Π
ktn...t1n...k1
(x) = Π
kn(t
n, Π
ktn−1...t1n−1...k1
(x)) for x ∈ Y
and
Π
tk11(x, s) = (Π
kt11(x), k
1)
Π
tknn...t...k11(x, s) = (Π
ktnn...t...k11(x), k
n) for (x, s) ∈ Y . Using equation (16) n times, we obtain
(32) U
nf (x, i)
= X
k1...kn
\
R+
. . .
\
R+
| {z }
n
p
ik1(x)p
k1k2(Π
kt11(x)) . . . p
kn−1kn(Π
ktn−1...t1n−1...k1
(x))
× f (Π
tknn...t...k11(x, i))a
ne
−a(t1+...+tn)dt
1. . . dt
n.
By the Prokhorov property there exists a compact set $F \subset \bar Y$ such that for every $\mu \in M_1(\bar Y)$ there exists an integer $n_1 = n_1(\mu)$ for which
$$P^n\mu(F) \ge 1/2 \quad\text{for } n \ge n_1.$$
From inequality (15) it follows that there exists $\bar t \in \mathbb{R}_+$ such that $r_0 = L e^{\lambda \bar t} < 1$. Fix $\varepsilon_1 > 0$. We can find $\varepsilon > 0$ and an integer $m$ such that $\varphi(\varepsilon) \le \varepsilon_1$ and

(33) $r_0^m \operatorname{diam}_{\bar\varrho} F \le \varepsilon/4$.
According to (13), for every $x, y \in Y$ there is $j_1$ such that

(34) $\varrho(\Pi_{j_1}(\bar t, x), \Pi_{j_1}(\bar t, y)) \le r_0\, \varrho(x, y)$.

By an induction argument, for all $(x, s), (y, z) \in \bar Y$ there is a sequence $j_1, \dots, j_m$ such that
$$\bar\varrho(\bar\Pi^{\bar t \dots \bar t}_{j_m \dots j_1}(x, s), \bar\Pi^{\bar t \dots \bar t}_{j_m \dots j_1}(y, z)) = \varrho(\Pi^{\bar t \dots \bar t}_{j_m \dots j_1}(x), \Pi^{\bar t \dots \bar t}_{j_m \dots j_1}(y)) \le r_0^m\, \varrho(x, y) \le r_0^m\, \bar\varrho((x, s), (y, z)).$$
By continuity, for all $(x, s), (y, z) \in F$ there are neighbourhoods $O_{(x,s)}$ of $(x, s)$, $O_{(y,z)}$ of $(y, z)$ and $O_{\bar t}$ of $\bar t$ such that

(35) $\bar\varrho(\bar\Pi^{t_m \dots t_1}_{j_m \dots j_1}(x, s), \bar\Pi^{t_m \dots t_1}_{j_m \dots j_1}(y, z)) \le r_0^m\, \bar\varrho((x, s), (y, z)) + \varepsilon/4 \le r_0^m \operatorname{diam}_{\bar\varrho} F + \varepsilon/4 \le \varepsilon/2$

for $(x, s) \in O_{(x,s)}$, $(y, z) \in O_{(y,z)}$, $t_i \in O_{\bar t}$, where $j_1, \dots, j_m$ depend on $x, y$. Since $F^2$ is a compact set, there is a finite covering

(36) $(O_{(x_1,s_1)} \times O_{(y_1,z_1)}) \cup \dots \cup (O_{(x_q,s_q)} \times O_{(y_q,z_q)}) \supset F^2.$
Let $\mu_1, \mu_2 \in M_1(\bar Y)$. By the Prokhorov condition there is an integer $\bar n = \bar n(\mu_1, \mu_2)$ such that
$$P^n\mu_k(F) \ge 1/2 \quad\text{for } n \ge \bar n.$$
Set $\bar\mu_k = P^{\bar n}\mu_k$. Then $(\bar\mu_1 \times \bar\mu_2)(F^2) \ge 1/4$, and according to (36) there is an integer $j = j(\mu_1, \mu_2)$ such that
$$(\bar\mu_1 \times \bar\mu_2)(O_{(x_j,s_j)} \times O_{(y_j,z_j)}) \ge 1/(4q),$$
and consequently, since each $\bar\mu_k$ is a probability measure,
$$\bar\mu_1(O_{(x_j,s_j)}) \ge 1/(4q), \qquad \bar\mu_2(O_{(y_j,z_j)}) \ge 1/(4q).$$
Let $i_1, \dots, i_m$ be the sequence corresponding to $x_j, y_j$. Write for simplicity $O_1 = O_{(x_j,s_j)}$, $O_2 = O_{(y_j,z_j)}$ and define $A = A_1 \cup A_2$, where
$$A_k = \{\bar\Pi^{t_m \dots t_1}_{i_m \dots i_1}(x, s) : (x, s) \in O_k \text{ and } t_l \in O_{\bar t} \text{ for } l \in \{1, \dots, m\}\}.$$
Using (33) and (35) we may evaluate the diameter of $A$ in the $\bar\varrho_\varphi$ metric:
$$\operatorname{diam}_{\bar\varrho_\varphi}(A) = \operatorname{diam}_{\varphi\circ\bar\varrho}(A) \le \varphi(\operatorname{diam}_{\bar\varrho} A) \le \varphi(\varepsilon) \le \varepsilon_1.$$
Let $n_0 = \bar n + m$. We have
$$P^{n_0}\mu_k(A) = P^m\bar\mu_k(A) = \langle U^m 1_A, \bar\mu_k \rangle \ge \langle U^m 1_{A_k}, \bar\mu_k \rangle.$$
Using equation (32) we obtain
$$P^{n_0}\mu_k(A) = \int_{\bar Y} \sum_{k_1, \dots, k_m} \underbrace{\int_{\mathbb{R}_+} \!\!\dots\! \int_{\mathbb{R}_+}}_{m} p_{s k_1}(x)\, p_{k_1 k_2}(\Pi^{t_1}_{k_1}(x)) \dots p_{k_{m-1} k_m}(\Pi^{t_{m-1} \dots t_1}_{k_{m-1} \dots k_1}(x))\, 1_{A_k}(\bar\Pi^{t_m \dots t_1}_{k_m \dots k_1}(x, s))\, a^m e^{-a(t_1 + \dots + t_m)}\,dt_1 \dots dt_m\,d\bar\mu_k(x, s).$$
Setting
$$\sigma = \inf\{p_{ij}(x) : x \in Y,\ i, j \in \{1, \dots, N\}\},$$
we may estimate $P^{n_0}\mu_k(A)$ as follows:
$$P^{n_0}\mu_k(A) \ge \sigma^m \int_{\bar Y} \underbrace{\int_{\mathbb{R}_+} \!\!\dots\! \int_{\mathbb{R}_+}}_{m} 1_{A_k}(\bar\Pi^{t_m \dots t_1}_{i_m \dots i_1}(x, s))\, a^m e^{-a(t_1 + \dots + t_m)}\,dt_1 \dots dt_m\,d\bar\mu_k(x, s)$$
$$\ge \sigma^m \int_{O_k} \underbrace{\int_{O_{\bar t}} \!\!\dots\! \int_{O_{\bar t}}}_{m} 1_{A_k}(\bar\Pi^{t_m \dots t_1}_{i_m \dots i_1}(x, s))\, a^m e^{-a(t_1 + \dots + t_m)}\,dt_1 \dots dt_m\,d\bar\mu_k(x, s)$$
$$= \sigma^m\, \bar\mu_k(O_k) \Big( \int_{O_{\bar t}} a e^{-at}\,dt \Big)^m.$$
On the other hand, $\bar\mu_k(O_k) \ge 1/(4q)$; thus condition (iii) is satisfied with
$$\beta = \frac{1}{4q}\, \sigma^m \Big( \int_{O_{\bar t}} a e^{-at}\,dt \Big)^m.$$
As a consequence, the operator $P$ is asymptotically stable in the metric space $(\bar Y, \varphi\circ\bar\varrho)$. But the metrics $\bar\varrho$ and $\varphi\circ\bar\varrho$ define the same space of continuous functions $C(\bar Y)$, and the weak convergence of a sequence of measures in $(\bar Y, \bar\varrho)$ and in $(\bar Y, \varphi\circ\bar\varrho)$ is the same. This proves that $P$ is also asymptotically stable in $(\bar Y, \bar\varrho)$.
Proof of Theorem 2. We first show that

(37) $\displaystyle \lim_{h\downarrow0} \frac{1}{h} \langle T_h f - f, \mu_* \rangle = 0.$
From (20) we have

(38) $\displaystyle T_h f(x, k) = \int_0^h \sum_{s=1}^N f(\Pi_s(h - t, \Pi_k(t, x)), s)\, p_{ks}(\Pi_k(t, x))\, a e^{-at}\,dt + f(\Pi_k(h, x), k)\, e^{-ah} + \varepsilon_1(h), \quad f \in C(\bar Y).$
Now we evaluate the integral on the right-hand side of (38). Denote by $C_L$ the subspace of $C(\bar Y)$ consisting of functions $f : \bar Y \to \mathbb{R}$ such that
$$|f(x, k) - f(y, k)| \le L_f \|x - y\| \quad\text{for } (x, k), (y, k) \in \bar Y,$$
where $L_f$ is a constant. Assume that $f \in C_L$. Then
$$|f(\Pi_s(h - t, \Pi_k(t, x)), s) - f(\Pi_k(t, x), s)| = |f(\Pi_s(h - t, \Pi_k(t, x)), s) - f(\Pi_s(0, \Pi_k(t, x)), s)| \le L_f \|\Pi_s(h - t, \Pi_k(t, x)) - \Pi_s(0, \Pi_k(t, x))\|.$$
Since there are $\delta, \gamma > 0$ such that
$$\|\Pi_s(\tau, x) - \Pi_s(0, x)\| \le \gamma\tau \quad\text{for } x \in Y \text{ and } \tau < \delta,$$
we obtain
$$|f(\Pi_s(h - t, \Pi_k(t, x)), s) - f(\Pi_k(t, x), s)| \le L_f \gamma (h - t) \quad\text{for } 0 < t < h < \delta.$$
Thus

(39) $\displaystyle T_h f(x, k) = \int_0^h \sum_{s=1}^N f(\Pi_k(t, x), s)\, p_{ks}(\Pi_k(t, x))\, a e^{-at}\,dt + f(\Pi_k(h, x), k)\, e^{-ah} + \varepsilon_2(h),$

where $\lim_{h\to0} \varepsilon_2(h)/h = 0$. On the other hand, since $\mu_*$ is $P$-invariant, we obtain
$$\langle T_h f, \mu_* \rangle - \langle f, \mu_* \rangle = \langle U T_h f, \mu_* \rangle - \langle U f, \mu_* \rangle,$$
where the operator $U : B(\bar Y) \to B(\bar Y)$ is given by (16).
From (16) and (39) we obtain

(40) $\displaystyle U T_h f(x, k) = \int_0^\infty \sum_{s=1}^N \int_0^h \sum_{i=1}^N f(\Pi_s(\tau, \Pi_s(t, x)), i)\, p_{si}(\Pi_s(\tau, \Pi_s(t, x)))\, p_{ks}(x)\, a^2 e^{-at} e^{-a\tau}\,d\tau\,dt + \int_0^\infty \sum_{s=1}^N f(\Pi_s(h, \Pi_s(t, x)), s)\, e^{-ah}\, a e^{-at}\, p_{ks}(x)\,dt + \varepsilon_2(h).$
Denote by $I_1^h f$ and $I_2^h f$ respectively the first and the second integral in (40). Thus
$$I_1^h f(x, k) = \int_0^\infty \sum_{s=1}^N \int_0^h \sum_{i=1}^N f(\Pi_s(\tau + t, x), i)\, p_{si}(\Pi_s(\tau + t, x))\, p_{ks}(x)\, a^2 e^{-a(\tau + t)}\,d\tau\,dt,$$
$$I_2^h f(x, k) = \int_h^\infty \sum_{s=1}^N f(\Pi_s(t, x), s)\, a e^{-at}\, p_{ks}(x)\,dt.$$
To calculate $I_1^h f$ and $I_2^h f - U f$, write
$$I_1^h f(x, k) = \int_0^h \sum_{s=1}^N \int_\tau^\infty \sum_{i=1}^N f(\Pi_s(t, x), i)\, p_{si}(\Pi_s(t, x))\, p_{ks}(x)\, a^2 e^{-at}\,dt\,d\tau$$
$$= h \sum_{s=1}^N \int_0^\infty \sum_{i=1}^N f(\Pi_s(t, x), i)\, p_{si}(\Pi_s(t, x))\, p_{ks}(x)\, a^2 e^{-at}\,dt + \varepsilon_3(h)$$
and
$$I_2^h f(x, k) - U f(x, k) = -\int_0^h \sum_{s=1}^N f(\Pi_s(t, x), s)\, a e^{-at}\, p_{ks}(x)\,dt = -ha \sum_{s=1}^N f(x, s)\, p_{ks}(x) + \varepsilon_4(h),$$
where $\lim_{h\to0} \varepsilon_i(h)/h = 0$ for $i = 3, 4$. We consider the operator $Q : C(\bar Y) \to C(\bar Y)$ defined by
$$Qf(y, l) = a \sum_{i=1}^N f(y, i)\, p_{li}(y) \quad\text{for } f \in C(\bar Y).$$
Now $I_1^h f$ and $I_2^h f - U f$ may be written in the form
$$I_1^h f(x, k) = h\,UQf(x, k) + \varepsilon_3(h), \qquad I_2^h f(x, k) - U f(x, k) = -h\,Qf(x, k) + \varepsilon_4(h).$$
Consequently,
$$U T_h f(x, k) - U f(x, k) = h\,UQf(x, k) - h\,Qf(x, k) + \varepsilon_3(h) + \varepsilon_4(h).$$
Furthermore,
$$\frac{1}{h} \langle T_h f - f, \mu_* \rangle = \frac{1}{h} \langle U T_h f - U f, \mu_* \rangle$$
and
$$\frac{1}{h} \langle T_h f - f, \mu_* \rangle = \langle UQf - Qf, \mu_* \rangle + \frac{1}{h} \langle \varepsilon_3(h) + \varepsilon_4(h), \mu_* \rangle$$
for $f \in C_L$ and $h < \delta$.
Taking the limit as $h \downarrow 0$ we obtain

(41) $\displaystyle \lim_{h\downarrow0} \frac{1}{h} \langle T_h f - f, \mu_* \rangle = \langle UQf - Qf, \mu_* \rangle$ for $f \in C_L$.

On the other hand, since $\mu_*$ is invariant with respect to $P$, we have
$$\langle UQf - Qf, \mu_* \rangle = \langle UQf, \mu_* \rangle - \langle Qf, \mu_* \rangle = \langle Qf, P\mu_* \rangle - \langle Qf, \mu_* \rangle = 0.$$
From this and (41) we obtain

(42) $\langle Af, \mu_* \rangle = 0$ for $f \in C^1 \cap C_L$,

where $A$ denotes the infinitesimal generator of the semigroup $\{T_t\}_{t\ge0}$ and
$$C^1 = \{f \in C(\bar Y) : f(\cdot, k) \in C^1(Y) \text{ for every } k \in \{1, \dots, N\}\}.$$
Moreover,
$$\frac{d}{dt} \langle T_t f, \mu_* \rangle = \langle A T_t f, \mu_* \rangle \quad\text{for } f \in C^1.$$
Thus
$$\langle T_t f, \mu_* \rangle = \mathrm{const} = \langle f, \mu_* \rangle.$$
From this and the definition of the semigroup $\{P_t\}_{t\ge0}$, we finally obtain
$$\langle f, P_t\mu_* \rangle = \langle f, \mu_* \rangle \quad\text{for } f \in C^1 \cap C_L,$$
and consequently $P_t\mu_* = \mu_*$ for $t \ge 0$.
Proof of Theorem 1. By Theorem 4 the operator $P$ given by (17) is asymptotically stable. Thus there exists an invariant measure $\bar\mu_*$ such that
$$\lim_{n\to\infty} \langle \bar f, \bar\mu_n \rangle = \langle \bar f, \bar\mu_* \rangle \quad\text{for } \bar f \in C_0(\bar Y),$$
where $\bar\mu_{n+1} = P\bar\mu_n$. Hence

(43) $\displaystyle \lim_{n\to\infty} \int_{\bar Y} \bar f(x, i)\,d\bar\mu_n(x, i) = \int_{\bar Y} \bar f(x, i)\,d\bar\mu_*(x, i)$ for $\bar f \in C_0(\bar Y)$.
Further, for every $f \in C_0(Y)$ we define $\bar f_j : \bar Y \to \mathbb{R}$, $j = 1, \dots, N$, by
$$\bar f_j(x, i) = \begin{cases} f(x) & \text{for } i = j, \\ 0 & \text{for } i \ne j. \end{cases}$$
It is evident that $\bar f_j \in C_0(\bar Y)$. From (43) it follows that
$$\lim_{n\to\infty} \sum_{j=1}^N \int_{\bar Y} \bar f_j(x, i)\,d\bar\mu_n(x, i) = \sum_{j=1}^N \int_{\bar Y} \bar f_j(x, i)\,d\bar\mu_*(x, i).$$
Consequently,
$$\lim_{n\to\infty} \sum_{j=1}^N \int_Y f(x)\,\bar\mu_n(dx \times \{j\}) = \sum_{j=1}^N \int_Y f(x)\,\bar\mu_*(dx \times \{j\}).$$
Setting $\mu_*(A) = \sum_{j=1}^N \bar\mu_*(A \times \{j\})$ for $A \in \mathcal{B}(Y)$ and using the definitions of $\mu_n$ and $\bar\mu_n$, we finally obtain
$$\lim_{n\to\infty} \int_Y f(x)\,\mu_n(dx) = \int_Y f(x)\,\mu_*(dx) \quad\text{for } f \in C_0(Y).$$