
ANNALES SOCIETATIS MATHEMATICAE POLONAE
Series I: COMMENTATIONES MATHEMATICAE XXIII (1983)
ROCZNIKI POLSKIEGO TOWARZYSTWA MATEMATYCZNEGO
Seria I: PRACE MATEMATYCZNE XXIII (1983)

Nguyen Van Thu (Wrocław)

Gaussian Markov processes on partially ordered sets

Abstract. In the present paper we characterize Gaussian Markov processes on partially ordered sets. Furthermore, we prove that every regular Gaussian Markov process {x_t} on R^d can be represented as a random series

x_t = h(t) Σ_{n=1}^{∞} V_n ∫_{−∞}^{t} g_n dμ   (t ∈ R^d),

where {V_n} is a sequence of independent random variables with distribution N(0, 1), {g_n} is a complete orthonormal system in L^2(μ), μ is a Borel measure on R^d and h(·) is a deterministic function on R^d. In particular, if the covariance function of a Gaussian Markov process is continuous, then the series Σ_n V_n ∫_{−∞}^{t} g_n dμ converges uniformly on every compact subset of R^d with probability one.

Let (T, ≤) be a partially ordered set such that for any t, s ∈ T there exists the infimum t ∧ s.

A real-valued Gaussian process {x_t}_{t∈T} with mean zero is said to be Markov if and only if for arbitrary t, s ∈ T

(1)   E[x_t | x_u : u ≤ s] = E[x_t | x_{t∧s}].

Let B(t, s) = E x_t x_s (t, s ∈ T) be the covariance function of {x_t}. Throughout this paper we suppose that for arbitrarily chosen t, s ∈ T

(2)   B(t, s) ≠ 0.

By virtue of (1) and (2) it follows that the process {x_t} is Markov if and only if for any t, s, u ∈ T such that u ≤ s the equation

(3)   B(t, u)B(t ∧ s, t ∧ s) = B(t, t ∧ s)B(t ∧ s, u)

holds.

For an arbitrary but fixed point t_0 ∈ T we define two auxiliary functions h and g by

(4)   h(t) = B(t, t_0)B(t ∧ t_0, t ∧ t_0)h(t_0) / B^2(t ∧ t_0, t_0)   (t ∈ T)

and

(5)   g(t) = B(t, t)B^2(t ∧ t_0, t_0) / (B(t, t_0)B(t ∧ t_0, t ∧ t_0)h(t_0))   (t ∈ T),

where the value h(t_0) can be arbitrarily chosen such that h(t_0) ≠ 0. Then for any t, s ∈ T the equation

(6)   B(t, s) = (g(t ∧ s)/h(t ∧ s)) h(t)h(s)

holds.

To prove this we consider the following steps:

(i) For every t ∈ T we have, by (4) and (5), the equations

g(t ∧ t_0)h(t_0) = B(t ∧ t_0, t ∧ t_0)h(t_0)B^2(t ∧ t_0, t_0) / (B(t ∧ t_0, t_0)B(t ∧ t_0, t ∧ t_0)h(t_0)) = B(t ∧ t_0, t_0).

(ii) For every t ∈ T we have

g(t ∧ t_0)h(t) = (B(t ∧ t_0, t_0)/h(t_0)) · (B(t, t_0)B(t ∧ t_0, t ∧ t_0)h(t_0) / B^2(t ∧ t_0, t_0)) = B(t, t_0)B(t ∧ t_0, t ∧ t_0) / B(t ∧ t_0, t_0).

From (3) it follows that

g(t ∧ t_0)h(t) = B(t ∧ t_0, t).

(iii) Suppose that t ≤ s; then we have t ∧ t_0 ≤ s ∧ t_0 ≤ t_0, which together with (3) implies

B(s ∧ t_0, t ∧ t_0) = B(t_0, t ∧ t_0)B(s ∧ t_0, s ∧ t_0) / B(t_0, s ∧ t_0).

(iv) Suppose that t ≤ s; then t ∧ t_0 ≤ s ∧ t_0 ≤ s and by (3) we have

B(s, t ∧ t_0) = B(s, s ∧ t_0)B(t ∧ t_0, s ∧ t_0) / B(s ∧ t_0, s ∧ t_0).

(v) Suppose that t ≤ s; then t ∧ t_0 ≤ t ≤ s and by (3) we have

B(t, s) = B(s, t ∧ t_0)B(t, t) / B(t, t ∧ t_0).

Now by (iii) and (iv) we have

B(t, s) = B(s, s ∧ t_0)B(t ∧ t_0, s ∧ t_0)B(t, t) / (B(s ∧ t_0, s ∧ t_0)B(t, t ∧ t_0))
        = B(s, s ∧ t_0)B(t_0, t ∧ t_0)B(t, t) / (B(t_0, s ∧ t_0)B(t, t ∧ t_0)).

Finally, by (i) and (ii), together with the identity B(t, t) = g(t)h(t), which follows directly from (4) and (5), we have

B(t, s) = g(s ∧ t_0)h(s) · g(t ∧ t_0)h(t_0) · g(t)h(t) / (g(s ∧ t_0)h(t_0) · g(t ∧ t_0)h(t)) = g(t)h(s).

Thus, we have proved that if t ≤ s, then

(7)   B(t, s) = g(t)h(s).

In general, let t, s ∈ T be arbitrary. Then equation (3) implies

B(t, s) = B(t, t ∧ s)B(s, t ∧ s) / B(t ∧ s, t ∧ s).

Hence and by (7) we have

B(t, s) = g(t ∧ s)h(t) · g(t ∧ s)h(s) / (g(t ∧ s)h(t ∧ s)) = (g(t ∧ s)/h(t ∧ s)) h(t)h(s),

which completes the proof of equation (6).

Now, putting f(t) = g(t)/h(t) (t ∈ T), we have the following theorem:

Theorem 1. All solutions of the functional equation (3) satisfying the conditions B(t, s) = B(s, t) ≠ 0 (t, s ∈ T) are of the form

(8)   B(t, s) = f(t ∧ s)h(t)h(s).

Remark. Representation (8) is unique in the sense that if B(t, s) = f'(t ∧ s)h'(t)h'(s), then there exists a constant c ≠ 0 such that f(t) = c^2 f'(t) and h(t) = (1/c)h'(t) for all t ∈ T.

It is clear that the right-hand side of (8) is a covariance function of a Markov process if and only if the function f(t ∧ s) defined on T × T is positive definite.
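The following worked example is an editorial addition rather than part of the original paper; it assumes the standard Brownian sheet on R_+^d as the illustrating process and records how its covariance fits representation (8).

```latex
% Assumed example (not from the paper): the standard Brownian sheet on R_+^d.
\[
  B(t, s) \;=\; \prod_{i=1}^{d} \min(t^{i}, s^{i})
          \;=\; f(t \wedge s)\, h(t)\, h(s),
  \qquad h \equiv 1, \quad
  f(a) \;=\; \prod_{i=1}^{d} a^{i} \;=\; \lambda\bigl(J_a \cap \mathbb{R}_+^d\bigr),
\]
where $\lambda$ denotes Lebesgue measure. The kernel $f(a \wedge b)$ is positive
definite, since for all real $c_1, \dots, c_n$ and points $a_1, \dots, a_n \in \mathbb{R}_+^d$
\[
  \sum_{i, j = 1}^{n} c_i c_j f(a_i \wedge a_j)
  \;=\; \int_{\mathbb{R}_+^d} \Bigl(\sum_{i = 1}^{n} c_i \chi_{J_{a_i}}\Bigr)^{2} d\lambda
  \;\ge\; 0 .
\]
```

This is exactly the mechanism exploited in the implication (ii) ⇒ (i) of Theorem 2 below.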

For every a ∈ T let J_a denote the cone up to a, i.e., J_a = {t ∈ T : t ≤ a}. Let ℛ be the ring generated by all such cones.

Theorem 2. Let f be a real-valued function defined on T. Then the following conditions are equivalent:

(i) The function φ(·, ·) defined on T × T by φ(a, b) = f(a ∧ b) (a, b ∈ T) is positive definite.

(ii) There exists an additive measure μ on the ring ℛ such that μ(J_a) = f(a) for every a ∈ T.

(iii) For any a, a_1, ..., a_n ∈ T (n = 1, 2, ...) such that a ≥ a_j (j = 1, 2, ..., n) the inequality

f(a) ≥ Σ_{k=1}^{n} (−1)^{k+1} Σ_{(i_1, i_2, ..., i_k) ⊂ {1, 2, ..., n}} f(a_{i_1} ∧ a_{i_2} ∧ ... ∧ a_{i_k})

holds.

Proof. (i) ⇒ (ii). Let μ be the unique additive set function defined on ℛ such that μ(J_a) = f(a) (a ∈ T) (in general μ can be signed). For every set B ∈ ℛ the indicator χ_B can be represented as

χ_B = (Σ_{j=1}^{n} ε_j χ_{J_{a_j}})^2   with ε_j = ±1.

Hence, and by the positive definiteness of φ,

μ(B) = ∫ χ_B dμ = Σ_{i,j=1}^{n} ε_i ε_j ∫ χ_{J_{a_i}} χ_{J_{a_j}} dμ = Σ_{i,j=1}^{n} ε_i ε_j μ(J_{a_i} ∩ J_{a_j}) = Σ_{i,j=1}^{n} ε_i ε_j f(a_i ∧ a_j) ≥ 0.

Consequently, μ is really an additive measure on ℛ and μ(J_a) = f(a) for every a ∈ T.

(ii) ⇒ (i). Let μ be as above; then for any real numbers λ_1, λ_2, ..., λ_n we have

Σ_{i,j=1}^{n} λ_i λ_j f(a_i ∧ a_j) = ∫ (Σ_{i=1}^{n} λ_i χ_{J_{a_i}})^2 dμ ≥ 0.

Therefore the function φ is positive definite.

(ii) ⇒ (iii). For a ≥ a_1, a_2, ..., a_n we have

μ(J_a \ ∪_{j=1}^{n} J_{a_j}) = f(a) + Σ_{k=1}^{n} (−1)^k Σ_{(i_1, ..., i_k) ⊂ {1, ..., n}} f(a_{i_1} ∧ a_{i_2} ∧ ... ∧ a_{i_k}) ≥ 0.

(iii) ⇒ (ii). Let μ be the additive set function on ℛ such that μ(J_a) = f(a) for every a ∈ T. We show that μ(B) ≥ 0 for every B ∈ ℛ. Every set B ∈ ℛ can be written as a finite disjoint union of sets of the form J_a \ (J_{a_1} ∪ ... ∪ J_{a_n}) with a_j ≤ a (j = 1, 2, ..., n), and for each such set the inclusion-exclusion formula gives

μ(J_a \ ∪_{j=1}^{n} J_{a_j}) = f(a) + Σ_{k=1}^{n} (−1)^k Σ_{(i_1, ..., i_k) ⊂ {1, ..., n}} f(a_{i_1} ∧ ... ∧ a_{i_k}),

which is non-negative by (iii). Consequently, μ(B) ≥ 0 for every B ∈ ℛ, so μ is an additive measure on ℛ. The theorem is thus proved.
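The following numerical sketch is an editorial addition, not the author's code. It assumes the hypothetical special case T = R^2 with the coordinatewise order and μ = Lebesgue measure on the positive quadrant, so that f(a) = μ(J_a ∩ R_+^2) = max(a^1, 0)·max(a^2, 0); by Theorem 2 (ii) ⇒ (i) the matrix (f(a_i ∧ a_j)) should then be positive semi-definite, which the script checks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration of Theorem 2 (ii) => (i): T = R^2 with the
# coordinatewise order, mu = Lebesgue measure on the positive quadrant,
# hence f(a) = mu(J_a ∩ R_+^2) = max(a1, 0) * max(a2, 0).
def f(a):
    a = np.maximum(np.asarray(a, dtype=float), 0.0)
    return a[0] * a[1]

def phi(a, b):
    # phi(a, b) = f(a ∧ b); the infimum in R^d is the componentwise minimum.
    return f(np.minimum(a, b))

points = rng.uniform(0.0, 2.0, size=(30, 2))          # random a_1, ..., a_n in R_+^2
G = np.array([[phi(a, b) for b in points] for a in points])

eigenvalues = np.linalg.eigvalsh(G)
print("smallest eigenvalue of (f(a_i ∧ a_j)):", eigenvalues.min())
assert eigenvalues.min() > -1e-9                       # positive semi-definite up to rounding
```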

Now we consider the Euclidean space case. Let T = R^d (d ≥ 1) and let ≤ be the natural partial ordering in R^d. For every d-dimensional rectangle I of the form I = (a^1, b^1] × (a^2, b^2] × ... × (a^d, b^d] we define

Δ_j f(t) = f(t^1, t^2, ..., t^j + (b^j − a^j), ..., t^d) − f(t)   (j = 1, 2, ..., d and t = (t^1, t^2, ..., t^d) ∈ R^d).

Furthermore we define

Δ_I f(t) = Δ_1 Δ_2 ... Δ_d f(t)   (t ∈ R^d).


Corollary 1. A function φ defined on R^d × R^d by φ(a, b) = f(a ∧ b) is positive definite if and only if:

(α) for any a, b ∈ R^d such that a ≤ b, 0 < f(a) ≤ f(b);

(β) for each rectangle I in R^d and each t ∈ R^d, Δ_I f(t) ≥ 0.

Proof. The "only if" part is clear. To prove the "if" part we suppose that (α) and (β) are satisfied. For any a_1, a_2, ..., a_n in R^d we put b = a_1 ∨ a_2 ∨ ... ∨ a_n (n = 1, 2, ...). It is easily seen that the set J_b \ ∪_{j=1}^{n} J_{a_j} can be represented as a disjoint sum of some rectangles I_1, ..., I_m in R^d. Consequently, for an additive set function μ on ℛ such that μ(J_a) = f(a) (a ∈ R^d) we have

μ(J_b \ ∪_{j=1}^{n} J_{a_j}) = Σ_{r=1}^{m} μ(I_r) = Σ_{r=1}^{m} Δ_{I_r} f(t_r) ≥ 0,

where t_r denotes the lower vertex of I_r. Hence and by (α), for every a ≥ a_j (j = 1, 2, ..., n),

f(a) ≥ f(b) ≥ Σ_{k=1}^{n} (−1)^{k+1} Σ_{(i_1, ..., i_k) ⊂ {1, ..., n}} f(a_{i_1} ∧ a_{i_2} ∧ ... ∧ a_{i_k}),

which, by Theorem 2, implies that the function φ is positive definite. The corollary is thus proved.
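A further editorial sketch (again an assumption-based illustration, not part of the paper) of condition (β): for a rectangle I = (a, b], the increment Δ_I f evaluated at the lower vertex of I is the alternating sum of f over the 2^d vertices, and for the hypothetical f(a) = a^1 a^2 used above it reduces to the area of I ∩ R_+^2, which is non-negative.

```python
import itertools
import numpy as np

def f(t):
    # Same hypothetical f as above: f(t) = Lebesgue mass of J_t ∩ R_+^2.
    t = np.maximum(np.asarray(t, dtype=float), 0.0)
    return float(np.prod(t))

def rectangle_increment(f, a, b):
    """Delta_I f at the lower vertex of I = (a, b]: alternating sum over the 2^d vertices."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    d = len(a)
    total = 0.0
    for eps in itertools.product((0, 1), repeat=d):
        vertex = np.where(np.array(eps) == 1, b, a)
        sign = (-1) ** (d - sum(eps))      # +1 when every upper coordinate is chosen
        total += sign * f(vertex)
    return total

# Condition (beta): Delta_I f >= 0; here it equals (b1 - a1) * (b2 - a2) = 0.8.
print(rectangle_increment(f, [0.5, 0.2], [1.5, 1.0]))
```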

In what follows we shall deal with Gaussian Markov processes on R^d whose covariance function B(·, ·) is left-continuous. It is clear that if the last condition is satisfied, then the functions f and h appearing in (8) are left-continuous. Moreover, if B(·, ·) is continuous, then so are f and h.

Let us denote by [x_t] the Hilbert space generated by all random variables x_t, t ∈ R^d, under mean-square convergence.

A Gaussian Markov process {x_t}_{t∈R^d} with a left-continuous covariance function is called regular if for every j = 1, 2, ..., d and every t ∈ R^d

lim_{s^j → −∞} E[x_t | x_u : u ≤ s] = 0,

where s = (s^1, s^2, ..., s^d) and the limit is taken in [x_t]. Furthermore, it is called singular if for any t, s ∈ R^d

E[x_t | x_u : u ≤ s] = x_t.

By a standard method we can prove that every Gaussian Markov process {x_t}_{t∈R^d} with a left-continuous covariance function can be decomposed into two independent Gaussian Markov processes such that the first one is regular and the second is singular. Moreover, if the covariance function of {x_t} is continuous, then so are the covariance functions of its components.

Lemma 1. A Gaussian Markov process {x_t}_{t∈R^d} is regular if and only if there exists a Borel measure μ (perhaps infinite) in R^d such that for every t ∈ R^d

f(t) = μ(J_t),

where f is the function appearing in (8).

Proof. For any t, s ∈ R^d we have the equations

E(E[x_t | x_u : u ≤ s])^2 = E(E[x_t | x_{t∧s}])^2 = h^2(t) f(t ∧ s).

Consequently, the process {x_t} is regular if and only if the covariance function is left-continuous and for every j = 1, 2, ..., d

lim_{s^j → −∞} f(t ∧ s) = 0.

The last conditions, according to Theorem 2 and Corollary 1, are equivalent to the existence of a measure μ on the Borel subsets of R^d such that f(t) = μ(J_t) (t ∈ R^d). Thus the lemma is proved.

Let us note that the measure μ in Lemma 1 is uniquely determined up to a positive coefficient, and in the sequel we shall call it an associated measure of the Gaussian Markov process {x_t}.

Let L^2(μ) denote the Hilbert space of Borel functions ψ on R^d such that ∫_{R^d} |ψ|^2 dμ < ∞. Since the set of functions {χ_{J_t} : t ∈ R^d} is linearly dense in L^2(μ), the set of random variables {x_t : t ∈ R^d} is linearly dense in [x_t], and for any t, s ∈ R^d

E (x_t/h(t))(x_s/h(s)) = f(t ∧ s) = ∫_{R^d} χ_{J_t} χ_{J_s} dμ,

there exists an isometric isomorphism ζ from L^2(μ) onto [x_t] such that for every t ∈ R^d

ζ(χ_{J_t}) = x_t/h(t).

Let g_1, g_2, ... be a CONS in L^2(μ) and let V_n = ζ(g_n) (n = 1, 2, ...). Then {V_n}_{n=1,2,...} is a CONS in [x_t]. Using the Parseval identity

χ_{J_t} = Σ_{n=1}^{∞} (∫_{−∞}^{t} g_n dμ) g_n   (t ∈ R^d)

and the isomorphism ζ, we get a representation of the process {x_t} as follows:

(9)   x_t = h(t) Σ_{n=1}^{∞} V_n ∫_{−∞}^{t} g_n dμ   (t ∈ R^d),

where the series on the right-hand side of (9) is convergent in [x_t]. Thus we have proved the following theorem:

Theorem 3. Let {x_t}_{t∈R^d} be a regular Gaussian Markov process and let μ be its associated measure. Then there are CONS's {g_n} in L^2(μ) and {V_n} in [x_t] such that for every t ∈ R^d equation (9) holds.

Remark. The Kolmogorov three-series theorem gives the convergence with probability 1 of series (9) for a fixed t ∈ R^d.
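The following simulation sketch is an editorial illustration of representation (9), not code from the paper. It assumes the simplest case d = 1 with standard Brownian motion on [0, 1], for which one may take h ≡ 1, the associated measure μ = Lebesgue measure on [0, 1] and the cosine CONS g_1 ≡ 1, g_n(u) = √2 cos((n − 1)πu); the truncated series then approximates a Brownian path, and the truncated variance at a fixed t approximates f(t) = t.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed setting for (9): d = 1, h ≡ 1, mu = Lebesgue measure on [0, 1],
# CONS g_1 ≡ 1, g_n(u) = sqrt(2) * cos((n - 1) * pi * u) for n >= 2.
# Then  ∫_0^t g_1 dmu = t  and  ∫_0^t g_n dmu = sqrt(2) * sin((n - 1) * pi * t) / ((n - 1) * pi).
N = 2000                                     # truncation level of the series
t = np.linspace(0.0, 1.0, 501)
V = rng.standard_normal(N)                   # independent N(0, 1) variables V_n

integrals = np.empty((N, t.size))            # row n holds ∫_0^t g_n dmu
integrals[0] = t
m = np.arange(1, N)[:, None]                 # m = n - 1 = 1, 2, ...
integrals[1:] = np.sqrt(2.0) * np.sin(m * np.pi * t) / (m * np.pi)

x = V @ integrals                            # truncated series (9): a simulated Brownian path
print("simulated path values at t = 0 and t = 1:", x[0], x[-1])

# Sanity check: the truncated variance sum_n (∫_0^t g_n dmu)^2 should be close to f(t) = t.
k = t.size // 2                              # index of t = 0.5
print("truncated variance at t = 0.5:", (integrals[:, k] ** 2).sum())   # ~ 0.5
```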

Let us fix a bounded rectangle I = [a^1, b^1] × [a^2, b^2] × ... × [a^d, b^d] and for every number p ≥ 1 let us denote by L^p(dt) the Banach space of all Borel functions ψ on I such that ∫_I |ψ(t)|^p dt < ∞.

Theorem 4. Let {x_t}_{t∈R^d} be a regular Gaussian Markov process. Then for every p ≥ 1

(10)   P{ω : x_·(ω)/h(·) ∈ L^p(dt)} = 1

and the series on the right-hand side of (9) converges in L^p(dt) with probability 1.

Proof. Equation (10) follows from the fact that for every p ≥ 1 the process |x_t/h(t)|^p is a semi-martingale on R^d.

To prove the second assertion let us denote

s_k(t) = Σ_{n=1}^{k} V_n ∫_{−∞}^{t} g_n dμ   (k = 1, 2, ...).

Then s_k(·), k = 1, 2, ..., are symmetric L^p(dt)-valued random variables. For every Borel function u ∈ (L^p(dt))^* = L^q(dt), where 1/p + 1/q = 1, we have

∫_I E|x_t/h(t) − s_k(t)| |u(t)| dt ≤ ∫_I (E(x_t/h(t) − s_k(t))^2)^{1/2} |u(t)| dt = ∫_I (Σ_{j=k+1}^{∞} (∫_{−∞}^{t} g_j dμ)^2)^{1/2} |u(t)| dt → 0

as k → +∞, because for every t ∈ I

(Σ_{j=k+1}^{∞} (∫_{−∞}^{t} g_j dμ)^2)^{1/2} → 0.

Hence and by the Ito-Nisio theorem ([3], Theorem 4.1), s_k(t) → x_t/h(t) in L^p(dt) with probability 1. The theorem is thus proved.

We now proceed to consider Gaussian Markov processes with continuous covariance functions. As before we fix a rectangle I = [a^1, b^1] × [a^2, b^2] × ... × [a^d, b^d]. Let {x_t}_{t∈R^d} be a regular Gaussian Markov process with continuous covariance function B(·, ·). Using the function f in (8) we define a congruence relation ~ on I as follows: for any t, s ∈ I, t ~ s if and only if f(t) + f(s) − 2f(t ∧ s) = 0. Then the quotient space S = I/~ is a compact metric space with the metric ρ defined by

(11)   ρ([t], [s]) = √(f(t) + f(s) − 2f(t ∧ s)),

where t, s ∈ I and [t], [s] are the equivalence classes of t and s in S.

For every ε > 0 there exists a finite number of balls in S of radius at most ε such that S is covered by these balls. Let N(ε) be the smallest number of such balls. Then log N(ε) is called the ε-entropy of S.

Lemma 2. The metric ρ on S defined by (11) induces an ε-entropy which satisfies the bound

(12)   log N(ε) ≤ k/ε

for some fixed constant k > 0 and for all sufficiently small ε > 0.

Proof. Let μ be the associated measure of the process {x_t}. Put v = μ(I), k = dv and n = [v/ε] + 1, where [v/ε] is the integral part of the number v/ε. Further, choose points a^j = a_1^j < a_2^j < ... < a_n^j = b^j simultaneously for j = 1, 2, ..., d such that f(a_{r+1}) − f(a_r) = ε for r = 1, 2, ..., n − 2 and f(a_n) − f(a_{n−1}) ≤ ε, where a_r = (a_r^1, a_r^2, ..., a_r^d) (r = 1, 2, ..., n). It is clear that such partitions induce a partition of I into n^d subrectangles I_1, I_2, ..., I_{n^d} such that μ(I_r) ≤ ε (r = 1, 2, ..., n^d). Putting I_r = [u_r^1, v_r^1] × ... × [u_r^d, v_r^d] and t_r = (v_r^1, ..., v_r^d) (r = 1, 2, ..., n^d), we infer that the balls S_r with centers [t_r] ∈ S and radius ε (r = 1, 2, ..., n^d) cover the set S. Now by the obvious relations

e^{dv/ε} ≥ (v/ε + 1)^d ≥ ([v/ε] + 1)^d = n^d ≥ N(ε)

we conclude that e^{k/ε} ≥ N(ε) and (12) holds. The lemma is thus proved.
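A rough numerical check, added editorially under the assumed special case d = 1, I = [0, 1], f(t) = t (so that ρ([t], [s]) = √|t − s|, v = μ(I) = 1 and k = dv = 1): a greedy covering of I by ρ-balls of radius ε gives N(ε) of order ε^{-2}, so log N(ε) indeed stays below the bound k/ε of (12) for small ε.

```python
import math

# Assumed case: d = 1, I = [0, 1], f(t) = t, rho(t, s) = sqrt(|t - s|), k = d * v = 1.
def covering_number(eps):
    """Greedy cover of [0, 1]: each rho-ball of radius eps covers an interval of length 2 * eps**2."""
    n, covered = 0, 0.0
    while covered < 1.0:
        covered += 2.0 * eps * eps
        n += 1
    return n

k = 1.0
for eps in (0.2, 0.1, 0.05, 0.02):
    N = covering_number(eps)
    print(f"eps = {eps:4.2f}   log N(eps) = {math.log(N):6.3f}   k/eps = {k / eps:6.1f}")
    assert math.log(N) <= k / eps
```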

Theorem 5. Let {x_t}_{t∈R^d} be a Gaussian Markov process with continuous covariance function. Then its sample functions are continuous with probability one.

Proof. It suffices to prove the theorem for a regular Markov process {x_t}_{t∈R^d}. Moreover, it suffices to prove the theorem for the process {z_t}_{t∈R^d} defined by z_t = x_t/h(t) (t ∈ R^d), where h is the function appearing in (8). Consider a rectangle I as in Lemma 2. Since E z_t z_s = f(t ∧ s), the process {z_t} induces the congruence relation ~ in I and the metric ρ in S = I/~. By Lemma 2 and by Strassen's results ([1], Lemma 2.2), it follows that almost all sample paths of the process {z_{[t]}} ([t] ∈ S) are continuous. Consequently, almost all sample paths of the process {z_t} (t ∈ I) are continuous. The theorem is thus proved.

Theorem 6. Let {x_t}_{t∈R^d} be a regular Gaussian Markov process with continuous covariance function. Then series (9) converges uniformly on every compact subset of R^d with probability one.

Proof. It suffices to show the uniform convergence of (9) over every rectangle I in R^d.

Consider the partial sums

S_n(t) = Σ_{k=1}^{n} V_k ∫_{−∞}^{t} g_k dμ   (n = 1, 2, ...)

as random variables with values in the Banach space C(I) of all continuous functions on I. For a given signed measure τ on I we have the inequalities

∫_I E|Σ_{k=n+1}^{∞} V_k ∫_{−∞}^{t} g_k dμ| |τ|(dt) ≤ ∫_I (E(Σ_{k=n+1}^{∞} V_k ∫_{−∞}^{t} g_k dμ)^2)^{1/2} |τ|(dt) = ∫_I (Σ_{k=n+1}^{∞} (∫_{−∞}^{t} g_k dμ)^2)^{1/2} |τ|(dt) → 0

as n → ∞, because for every t ∈ I

Σ_{k=n+1}^{∞} (∫_{−∞}^{t} g_k dμ)^2 → 0   and   Σ_{k=n+1}^{∞} (∫_{−∞}^{t} g_k dμ)^2 ≤ μ(J_t).

Consequently, lim_{n→∞} E|∫_I (x_t/h(t) − S_n(t)) τ(dt)| = 0 for every τ ∈ C^*(I). Since convergence in the L_1-metric implies convergence in probability and the sample functions of the process x_t/h(t) are continuous with probability one, we infer that ∫_I S_n(t) τ(dt) → ∫_I (x_t/h(t)) τ(dt) in probability. Finally, Theorems 4.1 and 3.1 of [3] applied to this give the uniform convergence of series (9) with probability one.

References

[1] S. M. Berman, Some continuity properties of Brownian motion with time parameter in Hilbert space, Trans. Amer. Math. Soc. 131 (1968), p. 182-198.
[2] Z. Ciesielski, Brownian motion with a several-dimensional time, Bull. Acad. Polon. Sci., Sér. sci. math., astr. et phys. 21 (1973), p. 629-635.
[3] K. Ito, M. Nisio, On the convergence of sums of independent Banach space valued random variables, Osaka J. Math. 5 (1968), p. 35-48.
[4] W. Timoszyk, A characterization of Gaussian processes that are Markovian, Colloq. Math. 30 (1974), p. 157-167.

INSTITUTE O F MATHEMATICS O F THE WROCLAW UNIVERSITY
