
V. MONSAN and M. N'ZI (Abidjan)

MINIMUM DISTANCE ESTIMATOR FOR A HYPERBOLIC STOCHASTIC PARTIAL DIFFERENTIAL EQUATION

Abstract. We study a minimum distance estimator in $L^2$-norm for a class of nonlinear hyperbolic stochastic partial differential equations, driven by a two-parameter white noise. The consistency and asymptotic normality of this estimator are established under some regularity conditions on the coefficients. Our results are applied to the two-parameter Ornstein–Uhlenbeck process.

1. Introduction. In recent years, there has been a growing interest in parameter estimation based on the minimum distance technique. For instance, Dietz and Kutoyants (1992, 1997) studied the problem of estimation of a parameter by the observations of an ergodic diffusion process.

The Ornstein–Uhlenbeck process with small diffusion coefficients was treated by Kutoyants (1994), Kutoyants et al. (1994), Kutoyants and Pilibossian (1994) and Hénaff (1995). Models for random field diffusions were considered by Kutoyants and Lessi (1995) for the distance defined by Hilbert-type metrics.

The purpose of this paper is to extend their results to a more general class of random fields. More precisely, we deal with the following nonlinear hyperbolic stochastic partial differential equation: for any $(t_1, t_2) \in \mathbb{R}^2_+$,
$$\frac{\partial^2 X_{t_1,t_2}}{\partial t_1\,\partial t_2} = S_1(\theta_0, t_1, t_2)\,\frac{\partial X_{t_1,t_2}}{\partial t_2} + S_2(\theta_0, t_1, t_2)\,\frac{\partial X_{t_1,t_2}}{\partial t_1} + S_3(\theta_0, t_1, t_2, X) + \varepsilon\,\dot{W}_{t_1,t_2}, \qquad (1)$$
with the initial condition $X_{t_1,t_2} = x$ on the axes, $x \in \mathbb{R}$.

2000 Mathematics Subject Classification: 62M09, 62F12.

Key words and phrases: minimum distance estimator; random fields; small noise; stochastic partial differential equations.


The coefficients are measurable functions
$$S_i : \Theta \times [0, T_1] \times [0, T_2] \to \mathbb{R}, \quad i = 1, 2, \qquad S_3 : \Theta \times [0, T_1] \times [0, T_2] \times C \to \mathbb{R},$$
where $\Theta \subset \mathbb{R}^k$ and $C$ stands for the set of all continuous real-valued functions defined on $[0, T_1] \times [0, T_2]$. $\{\dot{W}_{t_1,t_2} : (t_1, t_2) \in [0, T_1] \times [0, T_2]\}$ is a one-dimensional two-parameter white noise. Equations of this kind appear, for example, in the problem of constructing a Wiener sheet on manifolds (see Norris (1995)) and in nonlinear filtering theory for two-parameter processes (see Korezlioglu et al. (1983)). Their solutions are called two-parameter diffusion processes and there are two different approaches to solving them.
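Although the paper works entirely in continuous time, the driving noise has a simple discrete picture that may help intuition: on a rectangular grid, a two-parameter white noise is approximated by independent centered Gaussian cell increments with variance equal to the cell area, and the Wiener sheet $W$ is their double cumulative sum. The following Python sketch is an illustration added here (grid sizes and horizon are arbitrary choices), not part of the original paper.

```python
import numpy as np

def wiener_sheet(n1, n2, T1=1.0, T2=1.0, seed=0):
    """Approximate a Wiener sheet W on an (n1+1) x (n2+1) grid over [0,T1] x [0,T2].

    Each grid cell receives an independent N(0, cell_area) white-noise
    increment; W(t1, t2) is the sum of the increments in [0,t1] x [0,t2],
    so W = 0 on the axes.
    """
    rng = np.random.default_rng(seed)
    d1, d2 = T1 / n1, T2 / n2
    dW = rng.normal(0.0, np.sqrt(d1 * d2), size=(n1, n2))  # cell increments
    W = np.zeros((n1 + 1, n2 + 1))
    W[1:, 1:] = dW.cumsum(axis=0).cumsum(axis=1)  # double cumulative sum
    return W

W = wiener_sheet(200, 200)
print(W[-1, -1])  # a sample of W_{T1,T2} ~ N(0, T1 * T2)
```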

The first one was introduced by Farré and Nualart (1993). By a solution they mean a random field $\{X_{t_1,t_2} : (t_1, t_2) \in [0, T_1] \times [0, T_2]\}$ adapted to the natural filtration associated with the Wiener sheet $W$ and satisfying

$$X_{t_1,t_2} = x + \int_0^{t_1}\!\!\int_0^{t_2} S_1(\theta_0, s_1, s_2)\, X(s_1, ds_2)\, ds_1 + \int_0^{t_2}\!\!\int_0^{t_1} S_2(\theta_0, s_1, s_2)\, X(ds_1, s_2)\, ds_2 + \int_0^{t_1}\!\!\int_0^{t_2} S_3(\theta_0, s_1, s_2, X)\, ds_1\, ds_2 + \varepsilon W_{t_1,t_2}.$$

The other one is due to Rovira and Sanz-Solé (1995, 1996) who used a method based on the Green function $\gamma_{t_1,t_2}(\theta_0, s_1, s_2)$ associated with the second order differential operator

$$Lf(t_1, t_2) = \frac{\partial^2 f}{\partial t_1\,\partial t_2}(t_1, t_2) - S_1(\theta_0, t_1, t_2)\,\frac{\partial f}{\partial t_2}(t_1, t_2) - S_2(\theta_0, t_1, t_2)\,\frac{\partial f}{\partial t_1}(t_1, t_2).$$

Note that $\gamma_{t_1,t_2}(\theta_0, s_1, s_2)$ is the solution to the partial differential equation
$$\begin{cases} \dfrac{\partial^2 \gamma_{t_1,t_2}}{\partial s_1\,\partial s_2}(\theta_0, s_1, s_2) + \dfrac{\partial\big(S_1(\theta_0, s_1, s_2)\,\gamma_{t_1,t_2}(\theta_0, s_1, s_2)\big)}{\partial s_2} + \dfrac{\partial\big(S_2(\theta_0, s_1, s_2)\,\gamma_{t_1,t_2}(\theta_0, s_1, s_2)\big)}{\partial s_1} = 0, \\[1ex] \dfrac{\partial \gamma_{t_1,t_2}}{\partial s_1}(\theta_0, s_1, s_2) + S_1(\theta_0, s_1, s_2)\,\gamma_{t_1,t_2}(\theta_0, s_1, s_2) = 0 \quad \text{if } s_2 = t_2, \\[1ex] \dfrac{\partial \gamma_{t_1,t_2}}{\partial s_2}(\theta_0, s_1, s_2) + S_2(\theta_0, s_1, s_2)\,\gamma_{t_1,t_2}(\theta_0, s_1, s_2) = 0 \quad \text{if } s_1 = t_1, \\[1ex] \gamma_{t_1,t_2}(\theta_0, s_1, s_2) = 1 \quad \text{if } s_1 = t_1,\ s_2 = t_2. \end{cases}$$


The solution to (1) is defined by
$$X_{t_1,t_2} = x + \int_0^{t_1}\!\!\int_0^{t_2} \gamma_{t_1,t_2}(\theta_0, s_1, s_2)\, S_3(\theta_0, s_1, s_2, X)\, ds_1\, ds_2 + \varepsilon \int_0^{t_1}\!\!\int_0^{t_2} \gamma_{t_1,t_2}(\theta_0, s_1, s_2)\, W(ds_1, ds_2).$$

These two apparently different ways of solving equation (1) can be shown to be equivalent (see Rovira and Sanz-Solé (1996), Proposition 2.4).
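As a quick sanity check on this equivalence (a remark added here, not in the original text), observe that when $S_1 = S_2 = 0$ the Green function system is solved by $\gamma \equiv 1$, and both representations collapse to the same integral equation:
$$X_{t_1,t_2} = x + \int_0^{t_1}\!\!\int_0^{t_2} S_3(\theta_0, s_1, s_2, X)\, ds_1\, ds_2 + \varepsilon W_{t_1,t_2}.$$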

Now, we state the problem. The coefficients $S_i$ are supposed to be known but the value of the parameter $\theta_0$ is unknown. Our aim is to estimate $\theta_0$ by an $L^2$-minimum distance estimator (MDE) $\theta_\varepsilon$. The case $S_2 = 0$ was treated by Kutoyants and Lessi (1995).

We define $\theta_\varepsilon$ by
$$\theta_\varepsilon = \arg\inf_{\theta \in \Theta} \|X - x(\theta)\|_{L^2(\mu)},$$
which means that $\theta_\varepsilon$ is a solution to the equation
$$\|X - x(\theta_\varepsilon)\|_{L^2(\mu)} = \inf_{\theta \in \Theta} \|X - x(\theta)\|_{L^2(\mu)},$$
where $\|\cdot\|_{L^2(\mu)}$ denotes the $L^2(\mu)$-norm associated with a finite measure $\mu$ and $\{x_{t_1,t_2}(\theta) : (t_1, t_2) \in [0, T_1] \times [0, T_2]\}$ is the solution of equation (1) when $\theta_0$ is replaced by $\theta$ and $\varepsilon = 0$. Let us remark that $x_{t_1,t_2}(\theta)$ is a deterministic function.
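To make the definition concrete, the following minimal Python sketch (added here, not from the paper) computes a discretized version of $\theta_\varepsilon$: the $L^2(\mu)$ norm is approximated on a grid, with $\mu$ taken uniform as an arbitrary choice, and minimized with a generic optimizer. The observed field X_obs, the solver x_model and the parameter bounds are hypothetical placeholders to be supplied by the user.

```python
import numpy as np
from scipy.optimize import minimize

def mde(X_obs, x_model, theta_init, bounds=None, weights=None):
    """Discretized L2(mu) minimum distance estimate.

    X_obs   : observed field on a grid, shape (n1, n2)
    x_model : callable theta -> deterministic field x(theta), same shape
    weights : grid weights representing mu (default: uniform)
    """
    if weights is None:
        weights = np.full(X_obs.shape, 1.0 / X_obs.size)
    def loss(theta):
        resid = X_obs - x_model(theta)
        return float(np.sum(weights * resid ** 2))  # squared L2(mu) distance
    return minimize(loss, theta_init, bounds=bounds, method="L-BFGS-B").x
```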

The rest of the paper is organized as follows. In Section 2, we introduce notations and state some conditions on the coefficients which will be used throughout. We also recall some properties of the Green function $\gamma_{t_1,t_2}(\theta_0, s_1, s_2)$ and give some preliminary lemmas. Section 3 is devoted to the study of the asymptotic behavior of $\theta_\varepsilon$ as $\varepsilon \to 0$ through its consistency and asymptotic normality.

As usual, all constants appearing in the proofs are called C, although they may vary from one occurrence to another.

2. Notations and preliminaries. Let $\mathbb{R}_+ = [0, \infty)$ and $T = (T_1, T_2) \in \mathbb{R}^2_+$. $R_t$ stands for the rectangle $[0, t_1] \times [0, t_2]$. The set $\Theta$ of the parameters is a bounded open subset of $\mathbb{R}^k$, and $\varepsilon \in (0, 1]$.

Now, we state the conditions on the coefficients.

• (H1) For any $\theta \in \Theta$, $S_i(\theta, \cdot)$, $i = 1, 2$, is uniformly bounded and has uniformly bounded derivatives.


• (H2) There exists a constant $C > 0$ such that for any $(x, y) \in C \times C$, $t \in R_T$ and $\theta \in \Theta$,
$$|S_3(\theta, t, x) - S_3(\theta, t, y)| \le C|x_t - y_t|, \qquad |S_3(\theta, t, x)| \le C(1 + |x_t|).$$

• (H3) For any $t \in R_T$, $S_i(\cdot, t)$, $i = 1, 2$, has uniformly bounded first order and mixed second order partial derivatives.

Let $\dot{S}_{i\theta}$ denote the vector function of the derivatives of $S_i$, $i = 1, 2$, with respect to $\theta$:
$$\dot{S}_{i\theta}(\theta, t) = \Big(\frac{\partial S_i}{\partial \theta_1}(\theta, t), \ldots, \frac{\partial S_i}{\partial \theta_k}(\theta, t)\Big)^*,$$
where $A^*$ stands for the transpose of the matrix $A$.

If $S_3(\theta, t, x) = S_3(\theta, t, x_t)$, we denote by $\dot{S}_{3x}(\theta, t, x_t)$ the derivative of $S_3$ in $x$, i.e. $\dot{S}_{3x}(\theta, t, x_t) = \frac{\partial S_3}{\partial x}(\theta, t, x)\big|_{x = x_t}$. We let $\dot{S}_{3\theta}(\cdot, t, x_t)$ be the vector function of the derivatives of $S_3(\cdot, t, x)$ in $\theta$.

• (H4) For any $x \in C$, $\theta \in \Theta$ and $t \in R_T$, we have $S_3(\theta, t, x) = S_3(\theta, t, x_t)$ and:

(i) $S_3(\theta, t, \cdot)$ is differentiable with uniformly bounded derivatives, $\dot{S}_{3x}(\theta, t, \cdot)$ is continuous and there exist constants $C > 0$, $a \in (0, 1]$ and $b \in (0, 1]$ such that for any $(x, y) \in C \times C$, $(\theta_1, \theta_2) \in \Theta^2$ and $t \in R_T$,
$$|\dot{S}_{3x}(\theta_1, t, x_t) - \dot{S}_{3x}(\theta_2, t, y_t)| \le C(|x_t - y_t|^a + |\theta_1 - \theta_2|^b);$$

(ii) $S_3(\cdot, t, x_t)$ is differentiable, $\dot{S}_{3\theta}(\cdot, t, x_t)$ is continuous and for any compact subset $K$ of $\mathbb{R}$ there exist constants $C_K > 0$, $c \in (0, 1]$ and $d \in (0, 1]$ such that for any $(x, y) \in K^2$, $(\theta_1, \theta_2) \in \Theta^2$ and $t \in R_T$,
$$|\dot{S}_{3\theta}(\theta_1, t, x) - \dot{S}_{3\theta}(\theta_2, t, y)| \le C_K(|x - y|^c + |\theta_1 - \theta_2|^d).$$

Now, we recall the existence and uniqueness result for solutions to equation (1) and some properties of the Green function which will be needed.

Theorem 1. Under (H1) and (H2), there exists a unique continuous random field $X = \{X_t : t \in R_T\}$ which is a solution of (1).

Proof. See Rovira and Sanz-Solé (1995), Proposition 2.1, or Farré and Nualart (1993), Theorem 2.1.

Lemma 2. Under (H1) and (H2), we have:

(i) For any $\theta \in \Theta$ and $t \in R_T$, the function $s \mapsto \gamma_t(\theta, s)$ has uniformly bounded derivatives of first order and mixed partial derivatives of second order on $\{s \in R_T : 0 \le s_1 \le t_1,\ 0 \le s_2 \le t_2\}$.


(ii) There exists $C > 0$ such that $\sup_{\theta \in \Theta} \sup_{t \in R_T} \sup_{s \in R_t} |\gamma_t(\theta, s)| \le C$.

Proof. This follows immediately from the boundedness of $S_i$, $i = 1, 2$, and techniques developed in the proofs of Propositions 3.1 and 3.2 of Rovira and Sanz-Solé (1995).

Lemma 3. There exists $C > 0$ such that for any $t \in R_T$,
$$\sup_{\theta \in \Theta} \Big|\int_{R_t} \gamma_t(\theta, s)\, W(ds)\Big| \le C \sup_{s \in R_t} |W_s|.$$

Proof. In view of Lemma 2(i), for any $\theta \in \Theta$ and $t \in R_T$, the function $s \mapsto \gamma_t(\theta, s)$ has uniformly bounded first order derivatives and mixed second order derivatives. Therefore, we have

$$\int_{R_t} \gamma_t(\theta, s)\, W(ds) = W_t - \int_0^{t_1} \frac{\partial \gamma_t}{\partial s_1}(\theta, s_1, t_2)\, W_{s_1,t_2}\, ds_1 - \int_0^{t_2} \frac{\partial \gamma_t}{\partial s_2}(\theta, t_1, s_2)\, W_{t_1,s_2}\, ds_2 + \int_{R_t} \frac{\partial^2 \gamma_t}{\partial s_1\,\partial s_2}(\theta, s)\, W_s\, ds.$$

The desired result follows immediately.
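The identity above is the two-parameter analogue of the elementary integration by parts for Wiener integrals with smooth deterministic integrands; for the reader's convenience (this recollection is an addition), the one-parameter version reads
$$\int_0^t \gamma(s)\, dW_s = \gamma(t)\, W_t - \int_0^t \gamma'(s)\, W_s\, ds.$$
Applying it successively in each coordinate and using $\gamma_t(\theta, (t_1, t_2)) = 1$ yields the displayed formula, each term of which is bounded by a constant times $\sup_{s \in R_t} |W_s|$.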

Lemma 4. Under (H1) and (H2), there exists $C > 0$ such that
$$\sup_{\theta \in \Theta} \sup_{t \in R_T} |X_t(\theta) - x_t(\theta)| \le C\varepsilon \sup_{t \in R_T} |W_t|.$$

Proof. We have
$$|X_s(\theta) - x_s(\theta)| \le \int_{R_s} |\gamma_s(\theta, u)[S_3(\theta, u, X(\theta)) - S_3(\theta, u, x(\theta))]|\, du + \varepsilon \Big|\int_{R_s} \gamma_s(\theta, u)\, W(du)\Big|.$$

By Lemmas 2(ii) and 3, we have
$$|X_s(\theta) - x_s(\theta)| \le C \int_{R_s} |S_3(\theta, u, X(\theta)) - S_3(\theta, u, x(\theta))|\, du + \varepsilon C \sup_{u \in R_s} |W_u|.$$

Let
$$g_s = \sup_{\theta \in \Theta} \sup_{u \in R_s} |X_u(\theta) - x_u(\theta)|.$$
In view of (H2), we have
$$g_t \le C \int_{R_t} g_u\, du + \varepsilon C \sup_{u \in R_t} |W_u|.$$


Now, by using the Gronwall Lemma (see Dozzi (1989), p. 91), we deduce that
$$g_t \le \varepsilon C \sup_{u \in R_t} |W_u|.$$
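The form of the Gronwall Lemma used here (recalled as an addition, via the standard iteration argument) is: if $g$ is bounded and $g_t \le a + C \int_{R_t} g_u\, du$ for all $t \in R_T$, then iterating the inequality gives
$$g_t \le a \sum_{n=0}^{\infty} \frac{(C t_1 t_2)^n}{(n!)^2} \le a\, e^{C t_1 t_2},$$
which is applied above with $a = \varepsilon C \sup_{u \in R_t} |W_u|$.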

Let $Y = \{Y_t : t \in R_T\}$ be the solution of the following stochastic partial differential equation:
$$\frac{\partial^2 Y_t}{\partial t_1\,\partial t_2} = S_1(\theta_0, t)\,\frac{\partial Y_t}{\partial t_2} + S_2(\theta_0, t)\,\frac{\partial Y_t}{\partial t_1} + \dot{S}_{3x}(\theta_0, t, x_t(\theta_0))\,Y_t + \dot{W}_t,$$
with $Y_t = 0$ on the axes. We denote by $\tilde{x}_t(\theta)$ the vector function of the derivatives of $x_t(\theta)$ with respect to $\theta$ and put
$$J(\theta) = \int_{R_T} \tilde{x}_t(\theta)\,\tilde{x}_t(\theta)^*\, d\mu(t).$$
Let
$$\xi = J(\theta_0)^{-1} \int_{R_T} Y_t\,\tilde{x}_t(\theta_0)\, d\mu(t) \quad \text{and} \quad Z_t = \varepsilon^{-1}(X_t - x_t(\theta_0)).$$
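For orientation (an added remark): when $k = 1$, $J(\theta_0)$ is a scalar and $\xi$ takes the familiar least-squares form
$$\xi = \frac{\int_{R_T} Y_t\, \tilde{x}_t(\theta_0)\, d\mu(t)}{\int_{R_T} \tilde{x}_t(\theta_0)^2\, d\mu(t)}.$$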

Remark 5. $Y$ is a centered Gaussian random field. Therefore, $\xi$ is a centered Gaussian random variable with covariance matrix
$$\Gamma = J(\theta_0)^{-1} \Big[\int_{R_T}\!\int_{R_T} E(Y_s Y_t)\,\tilde{x}_t(\theta_0)\,\tilde{x}_s(\theta_0)^*\, d\mu(t)\, d\mu(s)\Big] J(\theta_0)^{-1}.$$

Lemma 6. Under (H1)–(H4), there exists $C > 0$ such that
$$\text{(i)}\quad \sup_{t \in R_T} |Y_t| \le C \sup_{t \in R_T} |W_t|,$$
$$\text{(ii)}\quad \sup_{t \in R_T} |Z_t - Y_t| \le C\varepsilon^a \Big(\sup_{t \in R_T} |W_t|\Big)^{1+a},$$
$$\text{(iii)}\quad \sup_{t \in R_T} |\tilde{x}_t(\theta) - \tilde{x}_t(\theta_0)| \le C\big(|\theta - \theta_0| + |\theta - \theta_0|^a + |\theta - \theta_0|^b + |\theta - \theta_0|^c + |\theta - \theta_0|^d\big).$$

Proof. (i) In view of Lemma 2(ii), Lemma 3 and (H4), we have
$$\sup_{s \in R_t} |Y_s| \le C \int_{R_t} \big(\sup_{u \in R_s} |Y_u|\big)\, ds + C \sup_{u \in R_t} |W_u|.$$
Now, the Gronwall Lemma leads to (i).

(ii) Let $g_t = |Z_t - Y_t|$. We have
$$g_t \le \int_{R_t} |\gamma_t(\theta_0, s)| \cdot \big|\varepsilon^{-1}[S_3(\theta_0, s, X_s) - S_3(\theta_0, s, x_s(\theta_0))] - \dot{S}_{3x}(\theta_0, s, x_s(\theta_0))\,Y_s\big|\, ds.$$


By Lemma 2(ii), we have
$$g_t \le C \int_{R_t} \big|\varepsilon^{-1}[S_3(\theta_0, s, X_s) - S_3(\theta_0, s, x_s(\theta_0))] - \dot{S}_{3x}(\theta_0, s, x_s(\theta_0))\,Y_s\big|\, ds.$$

Therefore, there exists $\widetilde{X}_s = x_s(\theta_0) + \beta(X_s - x_s(\theta_0))$, $\beta \in (0, 1)$, such that

$$g_t \le C \int_{R_t} \big|\varepsilon^{-1}(X_s - x_s(\theta_0))\,\dot{S}_{3x}(\theta_0, s, \widetilde{X}_s) - \dot{S}_{3x}(\theta_0, s, x_s(\theta_0))\,Y_s\big|\, ds$$
$$\le C \int_{R_t} \big|[\varepsilon^{-1}(X_s - x_s(\theta_0)) - Y_s]\,\dot{S}_{3x}(\theta_0, s, \widetilde{X}_s)\big|\, ds + C \int_{R_t} \big|[\dot{S}_{3x}(\theta_0, s, \widetilde{X}_s) - \dot{S}_{3x}(\theta_0, s, x_s(\theta_0))]\,Y_s\big|\, ds.$$

By using (H4), we obtain
$$g_t \le C \int_{R_t} \big(\sup_{u \in R_s} g_u\big)\, ds + C \int_{R_t} |\widetilde{X}_s - x_s(\theta_0)|^a\, |Y_s|\, ds.$$

Now, by Lemma 4, we have
$$\sup_{s \in R_t} g_s \le C \int_{R_t} \big(\sup_{u \in R_v} g_u\big)\, dv + C\varepsilon^a \Big(\sup_{u \in R_t} |W_u|\Big)^a \sup_{u \in R_t} |Y_u|.$$

In view of the Gronwall Lemma and (i), we deduce that
$$\sup_{t \in R_T} g_t \le C\varepsilon^a \Big(\sup_{t \in R_T} |W_t|\Big)^{1+a}.$$

(iii) First of all, let us prove that for any $t \in R_T$ and $s \in R_t$, the function $\theta \mapsto \gamma_t(\theta, s)$ has uniformly bounded derivatives. To this end, recall that
$$\gamma_t(\theta, s) = \sum_{n=0}^{\infty} H_n(\theta, t, s),$$
where $H_n$ is defined by

$$H_0(\theta, t, s) = 1, \qquad H_{n+1}(\theta, t, s) = \int_{s_1}^{t_1} S_1(\theta, u_1, s_2)\, H_n(\theta, t, (u_1, s_2))\, du_1 + \int_{s_2}^{t_2} S_2(\theta, s_1, u_2)\, H_n(\theta, t, (s_1, u_2))\, du_2, \quad n \ge 0.$$
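As a worked illustration of this recursion (an example added here; constant coefficients are assumed only for the computation), take $S_1 \equiv \theta_1$ and $S_2 \equiv \theta_2$ and write $h_1 = t_1 - s_1$, $h_2 = t_2 - s_2$. Then
$$H_1(\theta, t, s) = \theta_1 h_1 + \theta_2 h_2, \qquad H_2(\theta, t, s) = \tfrac{1}{2}\theta_1^2 h_1^2 + 2\theta_1\theta_2 h_1 h_2 + \tfrac{1}{2}\theta_2^2 h_2^2,$$
in exact agreement with the bound (2) below, with $C = \max(|\theta_1|, |\theta_2|)$.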

By using (H1) and an induction argument, one can prove that for any $n \ge 0$ and $s \in R_t$,
$$|H_n(\theta, t, s)| \le C^n \sum_{j=0}^{n} \binom{n}{j} \frac{(t_1 - s_1)^j (t_2 - s_2)^{n-j}}{j!\,(n-j)!}. \qquad (2)$$
Next, let us prove by induction that

$$\Big|\frac{\partial H_n}{\partial \theta}(\theta, t, s)\Big| \le n C^n \sum_{j=0}^{n} \binom{n}{j} \frac{(t_1 - s_1)^j (t_2 - s_2)^{n-j}}{j!\,(n-j)!}. \qquad (3)$$
For $n = 0$, this is obvious. Now,

$$\frac{\partial H_{n+1}}{\partial \theta}(\theta, t, s) = \int_{s_1}^{t_1} H_n(\theta, t, (u_1, s_2))\,\dot{S}_{1\theta}(\theta, u_1, s_2)\, du_1 + \int_{s_1}^{t_1} S_1(\theta, u_1, s_2)\,\frac{\partial H_n}{\partial \theta}(\theta, t, (u_1, s_2))\, du_1$$
$$+ \int_{s_2}^{t_2} H_n(\theta, t, (s_1, u_2))\,\dot{S}_{2\theta}(\theta, s_1, u_2)\, du_2 + \int_{s_2}^{t_2} S_2(\theta, s_1, u_2)\,\frac{\partial H_n}{\partial \theta}(\theta, t, (s_1, u_2))\, du_2.$$

By using (H1), (H2), (2) and the induction hypothesis, we deduce that

$$\Big|\frac{\partial H_{n+1}}{\partial \theta}(\theta, t, s)\Big| \le (n+1) C^{n+1} \Big[\int_{s_1}^{t_1} \sum_{j=0}^{n} \binom{n}{j} \frac{(u_1 - s_1)^j (t_2 - s_2)^{n-j}}{j!\,(n-j)!}\, du_1 + \int_{s_2}^{t_2} \sum_{j=0}^{n} \binom{n}{j} \frac{(t_1 - s_1)^j (u_2 - s_2)^{n-j}}{j!\,(n-j)!}\, du_2\Big]$$
$$= (n+1) C^{n+1} \sum_{j=0}^{n+1} \binom{n+1}{j} \frac{(t_1 - s_1)^j (t_2 - s_2)^{n+1-j}}{j!\,(n+1-j)!},$$
and (3) is proved.

It follows that
$$\Big|\frac{\partial H_n}{\partial \theta}(\theta, t, s)\Big| \le n\,\big(2(T_1 + T_2)C\big)^n \max_{j \in \{0, \ldots, n\}} \frac{1}{j!\,(n-j)!}.$$

Since $\max_{j \in \{0, \ldots, n\}} 1/(j!\,(n-j)!)$ is equal to $((n/2)!)^{-2}$ if $n$ is even, and to $(((n+1)/2)!\,((n-1)/2)!)^{-1}$ if $n$ is odd, we deduce that
$$\sum_{n=0}^{\infty} \Big|\frac{\partial H_n}{\partial \theta}(\theta, t, s)\Big| \le C < \infty,$$


which implies that $\theta \mapsto \gamma_t(\theta, s)$ is differentiable and
$$\frac{\partial \gamma_t}{\partial \theta}(\theta, s) = \sum_{n=0}^{\infty} \frac{\partial H_n}{\partial \theta}(\theta, t, s).$$
Therefore
$$\sup_{\theta \in \Theta} \sup_{t \in R_T} \sup_{s \in R_t} \Big|\frac{\partial \gamma_t}{\partial \theta}(\theta, s)\Big| < \infty.$$

By (H3), (3) and noting that for any $(i, j) \in \{1, \ldots, k\}^2$,
$$\frac{\partial^2 H_{n+1}}{\partial \theta_i\,\partial \theta_j}(\theta, t, s) = \int_{s_1}^{t_1} \frac{\partial^2 S_1}{\partial \theta_i\,\partial \theta_j}(\theta, u_1, s_2)\, H_n(\theta, t, (u_1, s_2))\, du_1 + \int_{s_1}^{t_1} \frac{\partial S_1}{\partial \theta_i}(\theta, u_1, s_2)\, \frac{\partial H_n}{\partial \theta_j}(\theta, t, (u_1, s_2))\, du_1$$
$$+ \int_{s_1}^{t_1} \frac{\partial S_1}{\partial \theta_j}(\theta, u_1, s_2)\, \frac{\partial H_n}{\partial \theta_i}(\theta, t, (u_1, s_2))\, du_1 + \int_{s_1}^{t_1} S_1(\theta, u_1, s_2)\, \frac{\partial^2 H_n}{\partial \theta_i\,\partial \theta_j}(\theta, t, (u_1, s_2))\, du_1$$
$$+ \int_{s_2}^{t_2} \frac{\partial^2 S_2}{\partial \theta_i\,\partial \theta_j}(\theta, s_1, u_2)\, H_n(\theta, t, (s_1, u_2))\, du_2 + \int_{s_2}^{t_2} \frac{\partial S_2}{\partial \theta_i}(\theta, s_1, u_2)\, \frac{\partial H_n}{\partial \theta_j}(\theta, t, (s_1, u_2))\, du_2$$
$$+ \int_{s_2}^{t_2} \frac{\partial S_2}{\partial \theta_j}(\theta, s_1, u_2)\, \frac{\partial H_n}{\partial \theta_i}(\theta, t, (s_1, u_2))\, du_2 + \int_{s_2}^{t_2} S_2(\theta, s_1, u_2)\, \frac{\partial^2 H_n}{\partial \theta_i\,\partial \theta_j}(\theta, t, (s_1, u_2))\, du_2,$$
one can prove that for any $n \ge 0$, $\theta \in \Theta$, $t \in R_T$ and $s \in R_t$,
$$\Big|\frac{\partial^2 H_n}{\partial \theta_i\,\partial \theta_j}(\theta, t, s)\Big| \le n(n+1) C^n \sum_{l=0}^{n} \binom{n}{l} \frac{(t_1 - s_1)^l (t_2 - s_2)^{n-l}}{l!\,(n-l)!}.$$
It follows that
$$\sup_{1 \le i,j \le k} \sup_{\theta \in \Theta} \sup_{t \in R_T} \sup_{s \in R_t} \Big|\frac{\partial^2 \gamma_t}{\partial \theta_i\,\partial \theta_j}(\theta, s)\Big| < \infty.$$


Now, let $\dot{\gamma}_{t,\theta}$ (resp. $\ddot{\gamma}_{t,\theta}$) stand for the vector (resp. matrix) function of the first (resp. second) order derivatives of $\gamma_t$ in $\theta$. We have
$$\tilde{x}_t(\theta) = \int_{R_t} S_3(\theta, s, x_s(\theta))\,\dot{\gamma}_{t,\theta}(\theta, s)\, ds + \int_{R_t} \gamma_t(\theta, s)\,[\dot{S}_{3\theta}(\theta, s, x_s(\theta)) + \dot{S}_{3x}(\theta, s, x_s(\theta))\,\tilde{x}_s(\theta)]\, ds.$$

Therefore
$$|\tilde{x}_t(\theta) - \tilde{x}_t(\theta_0)| \le \int_{R_t} \big|[S_3(\theta, s, x_s(\theta)) - S_3(\theta_0, s, x_s(\theta_0))]\,\dot{\gamma}_{t,\theta}(\theta, s)\big|\, ds + \int_{R_t} \big|S_3(\theta_0, s, x_s(\theta_0))\,[\dot{\gamma}_{t,\theta}(\theta, s) - \dot{\gamma}_{t,\theta}(\theta_0, s)]\big|\, ds$$
$$+ \int_{R_t} \big|\gamma_t(\theta, s)\,[\dot{S}_{3\theta}(\theta, s, x_s(\theta)) - \dot{S}_{3\theta}(\theta_0, s, x_s(\theta_0))]\big|\, ds + \int_{R_t} \big|[\gamma_t(\theta, s) - \gamma_t(\theta_0, s)]\,\dot{S}_{3\theta}(\theta_0, s, x_s(\theta_0))\big|\, ds$$
$$+ \int_{R_t} \big|\gamma_t(\theta, s)\,\dot{S}_{3x}(\theta, s, x_s(\theta))\,(\tilde{x}_s(\theta) - \tilde{x}_s(\theta_0))\big|\, ds + \int_{R_t} \big|\gamma_t(\theta, s)\,\tilde{x}_s(\theta_0)\,[\dot{S}_{3x}(\theta, s, x_s(\theta)) - \dot{S}_{3x}(\theta_0, s, x_s(\theta_0))]\big|\, ds$$
$$+ \int_{R_t} \big|\dot{S}_{3x}(\theta_0, s, x_s(\theta_0))\,\tilde{x}_s(\theta_0)\,[\gamma_t(\theta, s) - \gamma_t(\theta_0, s)]\big|\, ds.$$

By using (H1)–(H4), the boundedness of $\dot{\gamma}_{t,\theta}$, $\ddot{\gamma}_{t,\theta}$ and noting that the function $(\theta, t) \mapsto x_t(\theta)$ is bounded, we deduce that

$$|\tilde{x}_t(\theta) - \tilde{x}_t(\theta_0)| \le C\Big(|\theta - \theta_0| + |\theta - \theta_0|^b + |\theta - \theta_0|^d + \int_{R_t} |x_s(\theta) - x_s(\theta_0)|\, ds + \int_{R_t} |x_s(\theta) - x_s(\theta_0)|^a\, ds + \int_{R_t} |x_s(\theta) - x_s(\theta_0)|^c\, ds + \int_{R_t} |\tilde{x}_s(\theta) - \tilde{x}_s(\theta_0)|\, ds\Big). \qquad (4)$$

Now, it is not difficult to see that the function $(\theta, t) \mapsto \tilde{x}_t(\theta)$ is bounded. So,
$$\sup_{t \in R_T} |x_t(\theta) - x_t(\theta_0)| \le C|\theta - \theta_0|.$$


Hence, from (4) and the Gronwall Lemma, we deduce that
$$\sup_{t \in R_T} |\tilde{x}_t(\theta) - \tilde{x}_t(\theta_0)| \le C\big(|\theta - \theta_0| + |\theta - \theta_0|^a + |\theta - \theta_0|^b + |\theta - \theta_0|^c + |\theta - \theta_0|^d\big).$$

Now, since the ingredients for the proofs of the main results are assembled, we can deal with the asymptotic behavior of the estimator $\theta_\varepsilon$.

3. Results. Let $g_{\theta_0}(\delta) = \inf_{|\theta - \theta_0| > \delta} \|x(\theta) - x(\theta_0)\|_{L^2(\mu)}$ and $g(\delta) = \inf_{\theta_0 \in K} g_{\theta_0}(\delta)$, where $K$ is an arbitrary compact subset of $\Theta$. The following theorem ensures the consistency of $\theta_\varepsilon$.

Theorem 7. Under (H1) and (H2), there exists a constant $C > 0$ (independent of $K$) such that for any $\delta > 0$ and $\varepsilon \in (0, 1]$,
$$\sup_{\theta_0 \in K} P^{(\varepsilon)}_{\theta_0}(|\theta_\varepsilon - \theta_0| \ge \delta) \le C \exp\Big(-\frac{g^2(\delta)}{\varepsilon^2 C}\Big).$$

Proof. The proof is similar to that of Kutoyants and Lessi (1995), Theorem 3.1, and uses Theorem 1 and an exponential inequality for the Wiener random field.

The next result concerns the asymptotic law of $\theta_\varepsilon$ as $\varepsilon \to 0$.

Theorem 8. Assume that (H1)–(H4) are satisfied and for any $\delta > 0$,
$$g(\delta) > 0, \qquad \inf_{|u| = 1} \langle J(\theta_0)u, u\rangle > 0.$$
Then
$$P_{\theta_0}\text{-}\lim_{\varepsilon \to 0} \varepsilon^{-1}(\theta_\varepsilon - \theta_0) = \xi,$$
where $P_{\theta_0}\text{-}\lim$ denotes the convergence with respect to the probability $P_{\theta_0}$.

Proof. The proof is essentially based on Lemma 6 and follows the same lines as that of Kutoyants and Lessi (1995), Theorem 4.3. So, we only point out the minor adaptations needed. We make the change of variable $u = \varepsilon^{-1}(\theta - \theta_0)$ and put
$$U_{\theta_0,\varepsilon} = \{u \in \mathbb{R}^k : \theta_0 + \varepsilon u \in \Theta\}, \qquad u_\varepsilon = \arg\inf_{u \in U_{\theta_0,\varepsilon}} \Big\|Z - \frac{x(\theta_0 + \varepsilon u) - x(\theta_0)}{\varepsilon}\Big\|_{L^2(\mu)}.$$
We have $u_\varepsilon = \varepsilon^{-1}(\theta_\varepsilon - \theta_0)$. Now, set
$$A_1 = \Big\{\omega \in \Omega : \|X - x(\theta_0)\|_{L^2(\mu)} < \inf_{u \in U_{\theta_0,\varepsilon},\, |u| > \lambda_\varepsilon} \|X - x(\theta_0 + \varepsilon u)\|_{L^2(\mu)}\Big\},$$

where $\lambda_\varepsilon = \varepsilon^{-\delta}$, $\delta \in (0, 1]$. Following Kutoyants and Lessi (1995), we have
$$u_\varepsilon = J(\theta_0)^{-1} \int_{R_T} Z_t\,\tilde{x}_t(\theta_\varepsilon)\, d\mu(t) - J(\theta_0)^{-1}[J(\theta_\varepsilon, \tilde{\theta}_\varepsilon) - J(\theta_0)]\,u_\varepsilon.$$
Therefore
$$|u_\varepsilon - \xi| \le \Big|J(\theta_0)^{-1}\Big[\int_{R_T} (Z_t - Y_t)\,\tilde{x}_t(\theta_\varepsilon)\, d\mu(t) + \int_{R_T} (\tilde{x}_t(\theta_\varepsilon) - \tilde{x}_t(\theta_0))\,Y_t\, d\mu(t)\Big]\Big| + \big|J(\theta_0)^{-1}[J(\theta_\varepsilon, \tilde{\theta}_\varepsilon) - J(\theta_0)]\,u_\varepsilon\big|,$$
where $J(\theta_\varepsilon, \tilde{\theta}_\varepsilon)$ is equal to the matrix $\int_{R_T} \tilde{x}_t(\theta_\varepsilon)\,\tilde{x}_t(\tilde{\theta}_\varepsilon)^*\, d\mu(t)$ and $\tilde{\theta}_\varepsilon = \theta_0 + \gamma_t\,\varepsilon u$, $\gamma_t \in [0, 1)$. By Lemma 6, on $A_1$ we have
$$|u_\varepsilon - \xi| \le C\Big(\varepsilon^a \big(\sup_{t \in R_T} |W_t|\big)^{1+a} + \varepsilon^{1-\delta} + \varepsilon^{a-\delta} + \varepsilon^{b-\delta} + \varepsilon^{c-\delta} + \varepsilon^{d-\delta}\Big) + C\big(\varepsilon^{1-\delta} + \varepsilon^{a(1-\delta)} + \varepsilon^{b(1-\delta)} + \varepsilon^{c(1-\delta)} + \varepsilon^{d(1-\delta)}\big) \sup_{t \in R_T} |W_t|.$$

Now, put $e = a \wedge b \wedge c \wedge d$. Since $\varepsilon \in (0, 1]$, we deduce that on $A_1$ we have
$$|u_\varepsilon - \xi| \le C\Big(\varepsilon^a \big(\sup_{t \in R_T} |W_t|\big)^{1+a} + \varepsilon^{e-\delta} + \varepsilon^{e(1-\delta)} \sup_{t \in R_T} |W_t|\Big).$$

Let $r = a \wedge (e - \delta)$ and choose $\delta = e/(1 + a)$. Then on $A_1$ we have
$$|u_\varepsilon - \xi| \le C\varepsilon^{r/2}\Big(\varepsilon^{a - r/2} \big(\sup_{t \in R_T} |W_t|\big)^{1+a} + \varepsilon^{e - \delta - r/2} + \varepsilon^{e(1-\delta) - r/2} \sup_{t \in R_T} |W_t|\Big).$$

Now, set
$$A_2 = \{\omega \in \Omega : \sup_{t \in R_T} |W_t(\omega)| < \varepsilon^{-\varrho}\} \quad \text{and} \quad A = A_1 \cap A_2,$$
where $\varrho = \min\{(a - r/2)(1 + a)^{-1},\ e - \delta - r/2\}$. Then on $A$ we have
$$|u_\varepsilon - \xi| \le C\varepsilon^{r/2}(\varepsilon^{\alpha_1} + \varepsilon^{\alpha_2} + \varepsilon^{\alpha_3}),$$
where
$$\alpha_1 = a - \varrho(1 + a) - r/2 \ge 0, \qquad \alpha_2 = e - \delta - r/2 \ge 0, \qquad \alpha_3 = e - \delta - \varrho - r/2 \ge 0.$$
Following Kutoyants and Lessi (1995), one can prove that $P^{(\varepsilon)}_{\theta_0}(\bar{A}) \to 0$ as $\varepsilon \to 0$, where $\bar{A} = \Omega \setminus A$, which completes the proof.

Now, let us apply the above results to the two-parameter Ornstein–Uhlenbeck process with parameter $(\theta_1, \theta_2, \varepsilon)$, which is a random field $H$ defined in Dozzi (1989), p. 155, by
$$H_t = e^{\theta_1 t_1} x + e^{\theta_2 t_2} x - e^{\theta t} x + \varepsilon e^{\theta t} \int_{R_t} e^{-\theta u}\, dW_u,$$
where $\theta t$ stands for $\theta_1 t_1 + \theta_2 t_2$.

We remark that $H_t = x$ on the axes. Itô's formula in Guyon and Prum (1981), p. 634, implies that $H$ satisfies the nonlinear hyperbolic stochastic partial differential equation
$$\frac{\partial^2 H_t}{\partial t_1\,\partial t_2} = \theta_1\,\frac{\partial H_t}{\partial t_2} + \theta_2\,\frac{\partial H_t}{\partial t_1} - \theta_1\theta_2 H_t + \varepsilon \dot{W}_t, \qquad t \in \mathbb{R}^2_+.$$

Assume that $\theta = \binom{\theta_1}{\theta_2}$ is unknown, $\theta \in \Theta$, where $\Theta$ is a bounded open subset of $\mathbb{R}^2$, and put
$$\theta^{**}_\varepsilon = \arg\inf_{\theta \in \Theta} \|H - x(\theta)\|_{L^2(\mu)},$$
where
$$x_t(\theta) = e^{\theta_1 t_1} x + e^{\theta_2 t_2} x - e^{\theta t} x.$$

We have

Corollary 9. (i) There exists $C > 0$ (independent of the compact set $K$) such that
$$\sup_{\theta_0 \in K} P^{(\varepsilon)}_{\theta_0}(|\theta^{**}_\varepsilon - \theta_0| \ge \delta) \le \exp\Big(-\frac{g^2(\delta)}{\varepsilon^2 C}\Big).$$

(ii) $P_{\theta_0}\text{-}\lim_{\varepsilon \to 0} \varepsilon^{-1}(\theta^{**}_\varepsilon - \theta_0) = \xi$, where $\xi$ is a centered Gaussian random variable with covariance matrix
$$\Gamma = J(\theta_0)^{-1} \Big[\int_{R_T}\!\int_{R_T} \frac{1}{\theta^0_1 \theta^0_2}\,\big(e^{\theta^0_1 |t_1 - s_1|} - e^{\theta^0_1 (t_1 + s_1)}\big)\big(e^{\theta^0_2 |t_2 - s_2|} - e^{\theta^0_2 (t_2 + s_2)}\big)\,\tilde{x}_t(\theta_0)\,\tilde{x}_s(\theta_0)^*\, d\mu(t)\, d\mu(s)\Big] J(\theta_0)^{-1}.$$

Proof. It suffices to verify that in this case (H1)–(H4) are satisfied and
$$Y_t = e^{\theta_0 t} \int_{R_t} e^{-\theta_0 u}\, dW_u, \qquad E(Y_t Y_s) = \frac{1}{\theta^0_1 \theta^0_2}\,\big(e^{\theta^0_1 |t_1 - s_1|} - e^{\theta^0_1 (t_1 + s_1)}\big)\big(e^{\theta^0_2 |t_2 - s_2|} - e^{\theta^0_2 (t_2 + s_2)}\big).$$
Then use Remark 5 and apply Theorems 7 and 8.

References

H. Dietz and Y. Kutoyants (1992), A minimum-distance estimator for diffusion processes with ergodic properties, Tech. Report 11, Inst. Appl. Analysis and Stochastics, Berlin.
H. Dietz and Y. Kutoyants (1997), A class of minimum-distance estimators for diffusion processes with ergodic properties, Statistics and Decisions 15, 211–217.
M. Dozzi (1989), Stochastic Processes with a Multidimensional Parameter, Longman Sci. Tech.
M. Farré and D. Nualart (1993), Nonlinear stochastic integral equations in the plane, Stochastic Process. Appl. 46, 219–239.
X. Guyon and B. Prum (1981), Semimartingales à deux indices, Ph.D. Thesis, Univ. de Paris VI.
S. Hénaff (1995), On minimum distance estimate of the parameter of the Ornstein–Uhlenbeck process, preprint, Univ. of Angers.
H. Korezlioglu, G. Mazziotto and J. Szpirglas (1983), Nonlinear filtering equations for two parameter semimartingales, Stochastic Process. Appl. 15, 239–269.
Y. Kutoyants (1994), Identification of Dynamical Systems with Small Noise, Kluwer, Dordrecht.
Y. Kutoyants and O. Lessi (1995), Minimum distance estimation for diffusion random fields, Publ. Inst. Statist. Univ. Paris 29, fasc. 3, 3–20.
Y. Kutoyants, A. Nercessian and P. Pilibossian (1994), On limit distribution of the minimum sup norm estimate of the parameter of the Ornstein–Uhlenbeck process, Romanian J. Pure Appl. Math. 39, 119–139.
Y. Kutoyants and P. Pilibossian (1994), On minimum L1 estimate of the parameter of the Ornstein–Uhlenbeck process, Statist. Probab. Lett. 20, 117–123.
J. Norris (1995), Twisted sheets, J. Funct. Anal. 132, 273–334.
C. Rovira and M. Sanz-Solé (1995), A nonlinear hyperbolic SPDE: Approximations and support, in: London Math. Soc. Lecture Note Ser. 216, Cambridge Univ. Press, 241–261.
C. Rovira and M. Sanz-Solé (1996), The law of the solution to a nonlinear hyperbolic SPDE, J. Theoret. Probab. 9, 863–901.

Vincent Monsan and Modeste N'zi
Université de Cocody
UFR de Mathématiques et Informatique
Equipe de Probabilités et Statistique
BP 582 Abidjan 22, Côte d'Ivoire
E-mail: monsanv@ci.refer.org, nziy@ci.refer.org

Received on 17.8.1998; revised version on 10.6.1999
