
ASYMPTOTIC NORMALITY OF THE $L_1$ ERROR OF THE GRENANDER ESTIMATOR

BY PIET GROENEBOOM, GERARD HOOGHIEMSTRA AND HENDRIK P. LOPUHAÄ

Delft University of Technology

Groeneboom introduced a jump process that can be used (among other things) to study the asymptotic properties of the Grenander estimator of a monotone density. In this paper we derive the asymptotic normality of a suitably rescaled version of the $L_1$ error of the Grenander estimator, using properties of this jump process.

1. Introduction. Let $f$ be a decreasing density with support $[0,1]$. Denote by $F_n$ the empirical distribution function of a sample $X_1,\dots,X_n$ from $f$. Let $\hat F_n$ be the concave majorant of $F_n$ on $[0,1]$, by which we mean the smallest concave function such that
$$\hat F_n(t) \ge F_n(t), \quad t \in [0,1], \qquad \hat F_n(0) = 0, \quad \hat F_n(1) = 1.$$
The Grenander estimator $\hat f_n$ is defined as the left derivative of $\hat F_n$.
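As a concrete illustration (not part of the original paper), the left derivative of the least concave majorant can be computed exactly with an upper convex hull scan over the points $(0,0), (X_{(1)}, 1/n), \dots, (X_{(n)}, 1)$. The following minimal sketch is our own; the function name and interface are illustrative assumptions.

```python
import numpy as np

def grenander(x):
    """Grenander estimator on [0, 1]: left derivative of the least
    concave majorant (LCM) of the empirical distribution function.
    Returns the LCM breakpoints bt and the (decreasing) density
    values on each interval (bt[j], bt[j+1]]."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    t = np.concatenate(([0.0], x))       # knots of F_n, starting at 0
    F = np.arange(n + 1) / n             # F_n at the knots
    hull = [0]                           # indices of LCM vertices
    for i in range(1, n + 1):
        # pop the last vertex while it lies on or below the chord
        # from the second-to-last vertex to point i (upper hull scan)
        while len(hull) >= 2:
            j, k = hull[-2], hull[-1]
            if (F[k] - F[j]) * (t[i] - t[j]) <= (F[i] - F[j]) * (t[k] - t[j]):
                hull.pop()
            else:
                break
        hull.append(i)
    bt, bF = t[hull], F[hull]
    return bt, np.diff(bF) / np.diff(bt)  # breakpoints, slopes
```

On the breakpoints the LCM coincides with $F_n$; the returned slopes are automatically nonincreasing, and the estimator integrates to one because the slopes telescope from $F_n(0)=0$ to $F_n(X_{(n)})=1$.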

In Groeneboom (1985) the asymptotic behavior of $\hat f_n$ was investigated. Instead of studying the process $\{\hat f_n(t): t \in (0,1)\}$ itself, the more tractable inverse process $\{U_n(a): a \in [f(1), f(0)]\}$ was studied, where $U_n(a)$ is defined as the last time that the process $F_n(t) - at$ attains its maximum,

(1.1) $U_n(a) = \sup\{t \in [0,1]: F_n(t) - at \text{ is maximal}\}.$

A new proof, based on the inverse process $U_n$, was given of a result in Prakasa Rao (1969) on pointwise weak convergence of $\hat f_n$. In Groeneboom (1985) also analytical properties of the weak limit of the locally rescaled process $U_n(a)$ were discussed, and it was indicated how the process $U_n$, together with a Hungarian embedding technique, could be used to prove asymptotic normality of the $L_1$ error

(1.2) $\|\hat f_n - f\|_1 = \int_0^1 |\hat f_n(t) - f(t)|\, dt.$

The analytical properties of the limit process $a \mapsto V(a)$ were made rigorous in Groeneboom (1989), and at the same time it was mentioned that a rigorous treatment of the asymptotic normality of the $L_1$ error would appear elsewhere. This paper fulfills that promise.

Received March 1998; revised June 1999.
AMS 1991 subject classifications. Primary 62E20, 62G05; secondary 60J65, 60J75.
Key words and phrases. Brownian motion with quadratic drift, central limit theorem, concave majorant, isotonic estimation, jump process, $L_1$-norm, monotone density.

We feel that this result is important, since the problem of estimating a monotone density is closely related to several other inverse problems, for example, estimation of the distribution function of interval censored data [see, e.g., Groeneboom and Wellner (1992)] and estimation of a monotone hazard function, and since the result was referred to by several authors; see, for instance, Devroye and Györfi [(1985), pages 213 and 214], Devroye [(1987), page 145], Csörgő and Horváth (1988), Birgé (1989) and Wang (1992). Recently, the result has been taken up again in the context of nonparametric regression; see Durot (1996). In fact, the methods used by Durot (1996), whose work was done independently, are closer in spirit to the methods suggested in Groeneboom (1985) than our present paper, which relies on ideas developed in Groeneboom (1989). In both settings, the proof relies heavily on the fact that Brownian motion has independent increments. One of the main differences between the model considered in Durot (1996) and the present paper is that in the regression setting one can make a direct embedding into Brownian motion, whereas in our case we can only make such an embedding into the Brownian bridge, and we need rather delicate arguments to make the transition to Brownian motion (Corollary 3.3 in the present paper).

The main result can be stated as follows. Define

(1.3) $V(c) = \sup\{t: W(t) - (t - c)^2 \text{ is maximal}\},$

where $\{W(t): -\infty < t < \infty\}$ denotes standard two-sided Brownian motion on $\mathbb{R}$ originating from zero [i.e., $W(0) = 0$].
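The random variable $V(c)$ can be approximated by a discretized simulation (our own sketch, not part of the paper): generate two-sided Brownian motion on a grid, add the drift $-(t-c)^2$, and take the last location of the maximum. The truncation window $[-T, T]$ and the step size are numerical assumptions; the parabolic drift confines the maximizer near $c$ with overwhelming probability, so a moderate window suffices.

```python
import numpy as np

def sample_V(c, T=6.0, step=1e-3, rng=None):
    """Grid approximation of V(c) = sup{t : W(t) - (t - c)^2 is maximal},
    with W two-sided standard Brownian motion, W(0) = 0."""
    rng = np.random.default_rng() if rng is None else rng
    m = int(T / step)
    # glue two independent one-sided Brownian paths at the origin
    inc_r = rng.normal(0.0, np.sqrt(step), m)
    inc_l = rng.normal(0.0, np.sqrt(step), m)
    t = np.arange(-m, m + 1) * step
    W = np.concatenate((np.cumsum(inc_l)[::-1], [0.0], np.cumsum(inc_r)))
    drifted = W - (t - c) ** 2
    # last index attaining the maximum, matching the sup in (1.3)
    best = np.flatnonzero(drifted == drifted.max())[-1]
    return t[best]
```

By the symmetry of two-sided Brownian motion, $V(0)$ is symmetric around zero, which the simulation reproduces.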

THEOREM 1.1 (Main theorem). Let $f$ be a twice differentiable decreasing density on $[0,1]$, satisfying:

(A1) $0 < f(1) \le f(t) \le f(s) \le f(0) < \infty$, for $0 \le s \le t \le 1$;
(A2) $0 < \inf_{t \in (0,1)} |f'(t)| \le \sup_{t \in (0,1)} |f'(t)| < \infty$;
(A3) $\sup_{t \in (0,1)} |f''(t)| < \infty$.

Then with $\mu = 2E|V(0)| \int_0^1 |f'(t) f(t)/2|^{1/3}\, dt$,
$$n^{1/6} \Big\{ n^{1/3} \int_0^1 | \hat f_n(t) - f(t) |\, dt - \mu \Big\}$$
converges in distribution to a normal random variable with mean zero and variance $\sigma^2 = 8 \int_0^\infty \mathrm{cov}\big( |V(0)|, |V(c) - c| \big)\, dc$.
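To make the $n^{1/3}$ normalization concrete, the following self-contained simulation (ours, not from the paper) estimates $n^{1/3} E\|\hat f_n - f\|_1$ for the density $f(t) = 2(1-t)$, for which $\int_0^1 |f'(t)f(t)/2|^{1/3} dt = \int_0^1 (2(1-t))^{1/3} dt = 3 \cdot 2^{1/3}/4 \approx 0.94$. The grid size and replication counts are arbitrary choices; the hull subroutine recomputes the Grenander estimator from scratch so the block stands alone.

```python
import numpy as np

def l1_error(x, grid=4000):
    """L1 distance between the Grenander estimator and f(t) = 2(1 - t)."""
    x = np.sort(x); n = len(x)
    t = np.concatenate(([0.0], x)); F = np.arange(n + 1) / n
    hull = [0]
    for i in range(1, n + 1):             # least concave majorant scan
        while len(hull) >= 2:
            j, k = hull[-2], hull[-1]
            if (F[k] - F[j]) * (t[i] - t[j]) <= (F[i] - F[j]) * (t[k] - t[j]):
                hull.pop()
            else:
                break
        hull.append(i)
    bt, bF = t[hull], F[hull]
    slopes = np.diff(bF) / np.diff(bt)
    s = (np.arange(grid) + 0.5) / grid     # midpoints for the integral
    idx = np.searchsorted(bt[1:], s)       # which LCM segment s falls in
    fhat = np.where(idx < len(slopes),
                    slopes[np.minimum(idx, len(slopes) - 1)], 0.0)
    return np.mean(np.abs(fhat - 2.0 * (1.0 - s)))

rng = np.random.default_rng(2)
for n in (100, 1000):
    errs = [l1_error(rng.triangular(0.0, 0.0, 1.0, n)) for _ in range(40)]
    print(n, round(n ** (1 / 3) * np.mean(errs), 3))
```

The printed values of $n^{1/3}$ times the average $L_1$ error should stabilize as $n$ grows, consistent with the centering constant $\mu$ in the theorem (this demo does not attempt to check the $n^{1/6}$ fluctuation term, which would require far more replications).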

Actually, this is precisely the theorem as stated in Groeneboom (1985) (with the same conditions). In that paper, however, a sketch of proof of two pages was given, whereas, unfortunately, we need a lot more pages to write down all the details (an experience shared with Cécile Durot in her work on the regression problem). The difficulty in proving a result of this type stems from the fact that the Grenander estimator is a nonlinear functional of the empirical distribution function. For this reason methods of proof are needed that are very different from those used in, for example, Csörgő and Horváth (1988), where the linearity of the kernel estimators is used in an essential way.

In Section 2 we show

(1.4) $\|\hat f_n - f\|_1 = \int_{f(1)}^{f(0)} | U_n(a) - g(a) |\, da + o_p(n^{-1/2}),$

where $g$ denotes the inverse of $f$ (see Corollary 2.1). In this section we also obtain an exponential upper bound for the tail probabilities of $V_n^E(a) = n^{1/3}( U_n(a) - g(a) )$.

In Section 3 the process $a \mapsto V_n^E(a)$ is approximated (using Hungarian embedding) by a process $a \mapsto V_n^B(a)$, defined for the Brownian bridge. The process $V_n^B$ is in turn approximated by a similar process $a \mapsto V_n^W(a)$, defined for Brownian motion. A key tool for the results in this section is Lemma 3.4, showing that the probability of a jump of $V_n^B$ and $V_n^W$ in an interval of length $hn^{-1/3}$ is of order $h$, if $h$ is not too small. We suspect that the restriction "not too small" is actually not needed, but this restriction arises naturally in the present approach. The methods in this section are motivated by results that hold in the "canonical setting" of the process $V$, studied in Groeneboom (1989).

Another key observation that makes things work in Section 3 is that, although we cannot construct a Brownian motion and a Brownian bridge which are close in the supremum distance on $[0,1]$, we have that, if
$$W(F(t)) = B(F(t)) + \xi F(t),$$
where $B$ is the Brownian bridge on $[0,1]$ and $\xi$ is a standard normal random variable, independent of $B$, the associated processes of locations of maxima $V_n^B$ and $V_n^W$, defined for $B \circ F$ and $W \circ F$, respectively, are very close indeed.

The results in Section 3 imply that it is sufficient to prove that
$$n^{1/6} \int_{f(1)}^{f(0)} \big( |V_n^W(a)| - E|V_n^W(a)| \big)\, da$$
tends in distribution to a normal distribution with expectation 0 and variance $\sigma^2$, where $\sigma^2$ is given in Theorem 1.1. In Section 3 the process $V_n^W$ is also shown to be strongly mixing. This leads to a central limit theorem which is proved in Section 4 by using Bernstein's method of big blocks and small blocks. Throughout, it will be assumed that conditions (A1) to (A3) hold.

2. Localization. In this section we show that the distributions of the random variables

(2.1) $V_n^E(a) = n^{1/3}\big( U_n(a) - g(a) \big)$

have exponentially fast decreasing tails. This will enable us to compare the process $U_n$ locally with a similar process, defined for the Brownian bridge.

For $s \le t$, we use the following abbreviations:
$$F_n(s,t) = F_n(t) - F_n(s), \qquad F(s,t) = F(t) - F(s).$$

LEMMA 2.1. Let $a \in [f(1), f(0)]$ and let $t_0 = g(a)$. Then
$$P\{ V_n^E(a) > x \} \le P\Big\{ \sup_{t \in [t_0 + xn^{-1/3},\, 1]} \frac{F_n(t_0, t)}{F(t_0, t)} \ge \frac{f(t_0)\, xn^{-1/3}}{F(t_0, t_0 + xn^{-1/3})} \Big\},$$
for each $x$ such that $t_0 < t_0 + xn^{-1/3} \le 1$, and
$$P\{ V_n^E(a) < -x \} \le P\Big\{ \inf_{t \in [0,\, t_0 - xn^{-1/3}]} \frac{F_n(t, t_0)}{F(t, t_0)} \le \frac{f(t_0)\, xn^{-1/3}}{F(t_0 - xn^{-1/3}, t_0)} \Big\},$$
for each $x$ such that $0 \le t_0 - xn^{-1/3} < t_0$.

PROOF. For each $x$ such that $t_0 < t_0 + xn^{-1/3} \le 1$, we have

(2.2) $P\{ V_n^E(a) > x \} \le P\{ F_n(t_0, t) - a(t - t_0) \ge 0, \text{ for some } t \in (t_0 + xn^{-1/3}, 1] \},$

and for each $x$ such that $0 \le t_0 - xn^{-1/3} < t_0$,

(2.3) $P\{ V_n^E(a) < -x \} \le P\{ F_n(t, t_0) - a(t_0 - t) \le 0, \text{ for some } t \in [0, t_0 - xn^{-1/3}) \}.$

The probability on the right-hand side of (2.2) can be written as

(2.4) $P\Big\{ \dfrac{F_n(t_0, t)}{F(t_0, t)} \ge \dfrac{f(t_0)(t - t_0)}{F(t_0, t)}, \text{ for some } t \in (t_0 + xn^{-1/3}, 1] \Big\}.$

Since the function
$$\gamma(t) = \frac{f(t_0)(t - t_0)}{F(t_0, t)}$$
is increasing for $t \in (t_0, 1)$ (using the monotonicity of $f$), it follows that (2.4) is bounded above by
$$P\Big\{ \sup_{t \in (t_0 + xn^{-1/3},\, 1]} \frac{F_n(t_0, t)}{F(t_0, t)} \ge \frac{f(t_0)\, xn^{-1/3}}{F(t_0, t_0 + xn^{-1/3})} \Big\}.$$
Similarly, the probability on the right-hand side of (2.3) can be bounded from above by
$$P\Big\{ \inf_{t \in [0,\, t_0 - xn^{-1/3})} \frac{F_n(t, t_0)}{F(t, t_0)} \le \frac{f(t_0)\, xn^{-1/3}}{F(t_0 - xn^{-1/3}, t_0)} \Big\}. \qquad \blacksquare$$

To bound the probabilities given in Lemma 2.1, we will apply Doob's inequality to suitably chosen martingales. These martingales are given in the next lemma.

LEMMA 2.2. Let $0 \le t_0 \le 1$. Consider, for $n$ fixed, the processes
$$t \mapsto M_{1n}(t) = \frac{F_n(t_0, t)}{F(t_0, t)}, \quad t \in (t_0, 1],$$
and
$$t \mapsto M_{2n}(t) = \frac{F_n(t, t_0)}{F(t, t_0)}, \quad t \in [0, t_0).$$
Let $\mathcal{F}_s = \sigma\{ F_n(t): t \in [s, 1] \}$ and $\mathcal{G}_s = \sigma\{ F_n(t): t \in [0, s] \}$. Then, conditionally on $F_n(t_0)$, the process $M_{1n}$ is a reverse time martingale with respect to the filtration $\{ \mathcal{F}_s: s \in (t_0, 1] \}$ and $M_{2n}$ is a forward time martingale with respect to the filtration $\{ \mathcal{G}_s: s \in [0, t_0) \}$.

PROOF. Note that conditionally on $F_n(t_0)$ and $F_n(t_0, s)$, for $t_0 < t < s < 1$, the random variable $nF_n(t_0, t)$ has a binomial distribution with parameters $nF_n(t_0, s)$ and probability of success $p = F(t_0, t)/F(t_0, s)$. This shows that for $t < s$,
$$E_0\big[ F_n(t_0, t) \mid \mathcal{F}_s \big] = F_n(t_0, s)\, \frac{F(t_0, t)}{F(t_0, s)},$$
where $E_0(\cdot) = E[\,\cdot \mid F_n(t_0)]$. This implies that for $t_0 < t < s < 1$, we have that
$$E_0\big[ M_{1n}(t) \mid \mathcal{F}_s \big] = M_{1n}(s).$$
Similarly, conditionally on $F_n(t_0)$ and $F_n(s, t_0)$, for $0 < s < t < t_0$, the random variable $nF_n(t, t_0)$ has a binomial distribution with parameters $nF_n(s, t_0)$ and $p = F(t, t_0)/F(s, t_0)$. This leads to
$$E_0\big[ M_{2n}(t) \mid \mathcal{G}_s \big] = M_{2n}(s). \qquad \blacksquare$$
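The martingale structure of Lemma 2.2 is easy to probe numerically. The sketch below (our own, using a Uniform(0,1) sample so that $F(t_0,t) = t - t_0$; Lemma 2.2 does not require strict monotonicity of $f$) checks a consequence of the tower property: for $t < s$, $E[M_{1n}(t) M_{1n}(s)] = E[M_{1n}(s)^2]$.

```python
import numpy as np

def martingale_check(n=200, t0=0.2, t=0.5, s=0.8, reps=10000, rng=None):
    """Monte Carlo check of the reverse martingale property of
    M_1n(t) = F_n(t0, t) / F(t0, t) from Lemma 2.2, for a Uniform(0, 1)
    sample: for t < s the tower property gives
    E[ M_1n(t) M_1n(s) ] = E[ M_1n(s)^2 ]."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.random((reps, n))
    frac = lambda lo, hi: np.sum((x > lo) & (x <= hi), axis=1) / n
    m_t = frac(t0, t) / (t - t0)   # M_1n(t); F(t0, t) = t - t0 here
    m_s = frac(t0, s) / (s - t0)   # M_1n(s)
    return float(np.mean(m_t * m_s)), float(np.mean(m_s ** 2))
```

Both quantities also have mean close to 1, since $E\,F_n(t_0,t) = F(t_0,t)$.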

We have the following bounds for the martingales in Lemma 2.2.

LEMMA 2.3. Let $h(y) = 1 - y + y \log y$, $y > 0$. Then, for $t_0 \in [0, 1)$, $y \ge 1$ and $\delta > 0$ such that $t_0 + \delta < 1$,
$$P\Big\{ \sup_{t \in [t_0 + \delta,\, 1]} M_{1n}(t) \ge y \Big\} \le \exp\{ -nF(t_0, t_0 + \delta)\, h(y) \},$$
and for $t_0 \in (0, 1]$, $0 < y \le 1$ and $\delta > 0$ such that $t_0 - \delta > 0$,
$$P\Big\{ \inf_{t \in [0,\, t_0 - \delta]} M_{2n}(t) \le y \Big\} \le \exp\{ -nF(t_0 - \delta, t_0)\, h(y) \}.$$

PROOF. We start with the proof of the first inequality. According to Lemma 2.2, for $r > 0$, $\exp\{ rM_{1n}(t) \}$ is a reverse time submartingale. Hence, by Doob's inequality,
$$\begin{aligned}
P\Big\{ \sup_{t \in [t_0+\delta,\, 1]} M_{1n}(t) \ge y \Big\}
&= E\, P\Big\{ \sup_{t \in [t_0+\delta,\, 1]} M_{1n}(t) \ge y \,\Big|\, F_n(t_0) \Big\} \\
&= E\, P\Big\{ \sup_{t \in [t_0+\delta,\, 1]} \exp\{ rM_{1n}(t) \} \ge e^{ry} \,\Big|\, F_n(t_0) \Big\} \\
&\le E\Big[ e^{-ry}\, E\big( \exp\{ rM_{1n}(t_0+\delta) \} \,\big|\, F_n(t_0) \big) \Big]
= e^{-ry} E \exp\{ rM_{1n}(t_0+\delta) \}.
\end{aligned}$$
Using the fact that $nF_n(t_0, t_0+\delta)$ has a binomial distribution with parameters $n$ and $p = F(t_0, t_0+\delta)$, we see that the last expression is equal to
$$e^{-ry}\big( 1 + p( e^{r/(np)} - 1 ) \big)^n \le e^{-ry} \exp\{ np( e^{r/(np)} - 1 ) \} = e^{-nph(y)},$$
by putting $r = np \log y$ in the last equality. This proves the first exponential bound.

For the proof of the second inequality we note that, for $y \in (0, 1]$,
$$\begin{aligned}
P\Big\{ \inf_{t \in [0,\, t_0-\delta]} M_{2n}(t) \le y \Big\}
&= E\, P\Big\{ \sup_{t \in [0,\, t_0-\delta]} \{ -M_{2n}(t) \} \ge -y \,\Big|\, F_n(t_0) \Big\} \\
&\le E\Big[ e^{ry}\, E\big( \exp\{ -rM_{2n}(t_0-\delta) \} \,\big|\, F_n(t_0) \big) \Big]
= e^{ry} E \exp\{ -rM_{2n}(t_0-\delta) \},
\end{aligned}$$
where again Doob's inequality is used. Taking $p = F(t_0-\delta, t_0)$ and $r = -np \log y$, we get
$$e^{ry} E \exp\{ -rM_{2n}(t_0-\delta) \} \le \exp\{ -nph(y) \}. \qquad \blacksquare$$

REMARK. The function $y \mapsto h(y)$, used in Lemma 2.3 but also in the sequel, is a well-known function in large deviation theory. It is nonnegative and convex on $(0, \infty)$. Its minimum $0$ is attained at $y = 1$. Actually $h(y) = \int_1^y \log u\, du$, $y > 0$.
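The two descriptions of $h$ in the remark are easy to confirm numerically; the following throwaway check (not part of the paper) compares the closed form with a midpoint-rule evaluation of the integral representation.

```python
import math

def h(y):
    """h(y) = 1 - y + y log y, the large deviation rate in Lemma 2.3."""
    return 1.0 - y + y * math.log(y)

def h_integral(y, steps=20000):
    """Midpoint-rule evaluation of h(y) = integral from 1 to y of log u du
    (also valid for y < 1, where the oriented integral is positive)."""
    dt = (y - 1.0) / steps
    return sum(math.log(1.0 + (i + 0.5) * dt) for i in range(steps)) * dt
```

Both forms vanish at $y = 1$ and are strictly positive elsewhere, which is what drives the exponential bounds.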

We are now ready to prove the following theorem.

THEOREM 2.1. Let $V_n^E(a)$ be defined by (2.1). Then there exists a constant $C > 0$, only depending on $f$, such that for all $n \ge 1$, $a \in [f(1), f(0)]$ and $x > 0$,
$$P\{ |V_n^E(a)| > x \} \le 2e^{-Cx^3}.$$
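The cubic-exponential tail can be seen in a small simulation. The sketch below (ours, not from the paper) computes $V_n^E(a) = n^{1/3}(U_n(a) - g(a))$ for the density $f(t) = 1.5 - t$ on $[0,1]$, which satisfies (A1)-(A3); the sample size and cutoffs are arbitrary illustrative choices.

```python
import numpy as np

def v_n_e(x, a, f0=1.5):
    """V_n^E(a) = n^{1/3} (U_n(a) - g(a)) for f(t) = f0 - t on [0, 1],
    where U_n(a) is the last maximizer of F_n(t) - a t; since F_n(t) - a t
    decreases between jumps, the maximum is attained at a knot."""
    x = np.sort(x); n = len(x)
    t = np.concatenate(([0.0], x))
    vals = np.arange(n + 1) / n - a * t
    U = t[len(t) - 1 - np.argmax(vals[::-1])]   # last maximizing knot
    return n ** (1 / 3) * (U - (f0 - a))        # g(a) = f0 - a
```

Sampling from $f$ by inverse CDF ($F(t) = 1.5t - t^2/2$, so $F^{-1}(u) = 1.5 - \sqrt{2.25 - 2u}$) and repeating shows that values of $|V_n^E(a)|$ beyond a few units are very rare, in line with the $2e^{-Cx^3}$ bound.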

PROOF. We will write $\delta_n = xn^{-1/3}$. First consider the probability

(2.5) $P\{ V_n^E(a) > x \}.$

If $g(a) + \delta_n \ge 1$, this probability is zero, in which case there is nothing to prove, so we may assume $g(a) + \delta_n < 1$.

Let
$$y_n = \frac{f(t_0)\, \delta_n}{F(t_0, t_0 + \delta_n)},$$
where $t_0 = g(a)$. Note that $y_n > 1$, since $f$ is strictly decreasing. We also have, using assumption (A1),
$$y_n = \frac{f(t_0)\, \delta_n}{F(t_0, t_0 + \delta_n)} \le \frac{f(t_0)}{f(t_0 + \delta_n)} \le \frac{f(0)}{f(1)} < \infty.$$
Hence $1 < y_n < c_1$, for a constant $c_1 > 0$, independent of $x$ such that $t_0 + \delta_n < 1$. By Lemma 2.1, the probability in (2.5) is bounded above by
$$P\Big\{ \sup_{t \in [t_0 + \delta_n,\, 1]} M_{1n}(t) \ge y_n \Big\}.$$
According to Lemma 2.3 this probability is bounded by

(2.6) $\exp\{ -nF(t_0, t_0 + \delta_n)\, h(y_n) \}.$

Using a Taylor expansion with a Lagrangian remainder term of the convex function $u \mapsto h(u)$ at $u = 1$, we get

(2.7) $h(y_n) = \tfrac{1}{2} h''(\xi_n)( y_n - 1 )^2 \ge \tfrac{1}{2} c_1^{-1} ( y_n - 1 )^2,$

where $1 \le \xi_n \le c_1$. However,
$$| y_n - 1 | \ge \frac{\delta_n \inf_{u \in (0,1)} |f'(u)|}{2 f(0)},$$
and hence, by (2.7),
$$h(y_n) \ge c_2 \delta_n^2,$$
for a constant $c_2 > 0$, independent of $x$ such that $t_0 + \delta_n < 1$. Since $F(t_0, t_0 + \delta_n) \ge f(1)\delta_n$, it now follows that (2.6) is bounded above by $\exp(-Cx^3)$.

Now consider the probability

(2.8) $P\{ V_n^E(a) < -x \}.$

If $g(a) - xn^{-1/3} \le 0$, this probability is zero, so we can restrict ourselves to an $x > 0$ such that $g(a) - xn^{-1/3} > 0$. Define
$$y_n = \frac{f(t_0)\, \delta_n}{F(t_0 - \delta_n, t_0)}.$$
The fact that $f$ is strictly decreasing this time implies that $y_n < 1$. Using Lemma 2.1, it is seen that (2.8) is bounded above by
$$P\Big\{ \inf_{t \in [0,\, t_0 - \delta_n]} M_{2n}(t) \le y_n \Big\},$$
which, by Lemma 2.3, leads to the upper bound
$$\exp\{ -nf(1)\delta_n\, h(y_n) \}.$$
We have, using $h''(x) \ge 1$ for $x \in (0, 1]$,
$$h(y_n) = \tfrac{1}{2} h''(\xi_n)( y_n - 1 )^2 \ge \tfrac{1}{2} ( y_n - 1 )^2,$$
where in this case $0 < \xi_n \le 1$. Following the same line of argument as above, we get the upper bound $\exp\{ -Cx^3 \}$. $\blacksquare$

Lemma 2.3 also enables us to show that the difference between the $L_1$ risk in (1.2) and the integral
$$\int_{f(1)}^{f(0)} | U_n(a) - g(a) |\, da,$$
defined in terms of the inverse process, is $o_p(n^{-1/2})$.

COROLLARY 2.1. Let $\hat f_n$ be the Grenander estimator and let $U_n$ be defined in (1.1). Then

(2.9) $\displaystyle \int_0^1 | \hat f_n(t) - f(t) |\, dt - \int_{f(1)}^{f(0)} | U_n(a) - g(a) |\, da = O_p( n^{-2/3} ).$
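Behind (2.9) is a switching relation: apart from the boundary terms handled in the proof, both integrals measure the area between the two monotone curves $\hat f_n$ and $f$, once in the $t$-direction and once in the $a$-direction. A quick numerical sanity check (our own construction, for $f(t) = 1.5 - t$, so that $g(a) = 1.5 - a$ on $[0.5, 1.5]$; grid sizes are arbitrary):

```python
import numpy as np

def l1_both_ways(x, f0=1.5, f1=0.5, grid=4000):
    """Compare the L1 error of the Grenander estimator with the integral
    of |U_n(a) - g(a)| over [f(1), f(0)], for f(t) = 1.5 - t on [0, 1]."""
    x = np.sort(x); n = len(x)
    t = np.concatenate(([0.0], x)); F = np.arange(n + 1) / n
    hull = [0]
    for i in range(1, n + 1):              # least concave majorant of F_n
        while len(hull) >= 2:
            j, k = hull[-2], hull[-1]
            if (F[k] - F[j]) * (t[i] - t[j]) <= (F[i] - F[j]) * (t[k] - t[j]):
                hull.pop()
            else:
                break
        hull.append(i)
    bt, bF = t[hull], F[hull]
    slopes = np.diff(bF) / np.diff(bt)
    s = (np.arange(grid) + 0.5) / grid
    idx = np.searchsorted(bt[1:], s)
    fhat = np.where(idx < len(slopes),
                    slopes[np.minimum(idx, len(slopes) - 1)], 0.0)
    lhs = np.mean(np.abs(fhat - (f0 - s)))           # int_0^1 |fhat - f| dt
    # inverse process: U_n(a) = last maximizer of F_n(t) - a t over the knots
    a = f1 + (f0 - f1) * (np.arange(grid) + 0.5) / grid
    vals = F[None, :] - np.outer(a, t)               # shape (grid, n + 1)
    last_max = vals.shape[1] - 1 - np.argmax(vals[:, ::-1], axis=1)
    U = t[last_max]
    rhs = (f0 - f1) * np.mean(np.abs(U - (f0 - a)))  # int |U_n(a) - g(a)| da
    return lhs, rhs
```

For moderate samples the two quantities should agree up to the $O_p(n^{-2/3})$ boundary terms, which are an order of magnitude smaller than the integrals themselves.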

PROOF. The difference on the left-hand side of (2.9) can be written as
$$\int_0^1 \big( \hat f_n(t) - f(0) \big)^+ dt + \int_0^1 \big( f(1) - \hat f_n(t) \big)^+ dt,$$
where $x^+ = \max(0, x)$, $x \in \mathbb{R}$. We will show that the first term is $O_p(n^{-2/3})$. The second term can be treated similarly.

We have that
$$\begin{aligned}
\int_0^1 \big( \hat f_n(t) - f(0) \big)^+ dt
&= \int_0^{U_n(f(0))} \big( \hat f_n(t) - f(0) \big)\, dt
= \hat F_n\big( U_n(f(0)) \big) - f(0)\, U_n(f(0)) \\
&= F_n\big( U_n(f(0)) \big) - F\big( U_n(f(0)) \big) + F\big( U_n(f(0)) \big) - f(0)\, U_n(f(0)),
\end{aligned}$$
since the concave majorant $\hat F_n$ coincides with $F_n$ at the point $U_n(f(0))$. According to Theorem 2.1, for the second difference on the right-hand side we have

(2.10) $\big| F\big( U_n(f(0)) \big) - f(0)\, U_n(f(0)) \big| \le \tfrac{1}{2} \sup_t |f'(t)|\, U_n(f(0))^2 = O_p( n^{-2/3} ).$

Let $Z_n = F_n( U_n(f(0)) ) - F( U_n(f(0)) )$ and $\delta_n = n^{-1/3} \log n$. Then write
$$Z_n = Z_n 1_{\{ U_n(f(0)) > \delta_n \}} + Z_n 1_{\{ U_n(f(0)) \le \delta_n \}}.$$
Then, according to Theorem 2.1, $P\{ U_n(f(0)) > \delta_n \} \le 2\exp\{ -C(\log n)^3 \}$, while $|Z_n| \le 1$. Hence by the Markov inequality we can conclude that

(2.11) $Z_n 1_{\{ U_n(f(0)) > \delta_n \}} = o_p( n^{-2/3} ).$

Let $(B_n)$ be a sequence of Brownian bridges given by the Hungarian embedding approximating $n^{1/2}(F_n - F)$ [cf. Komlós, Major and Tusnády (1975)]. Then
$$\big| Z_n 1_{\{ U_n(f(0)) \le \delta_n \}} \big| \le n^{-1/2} \sup_{t \in [0, F(\delta_n)]} | B_n(t) | + O_p( n^{-1} \log n ).$$
Since $B_n(t) \stackrel{d}{=} W(t) - tW(1)$, where $W$ denotes Brownian motion, the right-hand side can be bounded by a random variable that has the same distribution as
$$n^{-1/2} \sup_{t \in [0, F(\delta_n)]} | W(t) | + n^{-1/2} F(\delta_n) | W(1) | + O_p( n^{-1} \log n ).$$
Note that $F(\delta_n)|W(1)| = O_p(\delta_n)$. Furthermore, since for any $\varepsilon > 0$,
$$P\Big\{ \sup_{t \in [0, F(\delta_n)]} | W(t) | > \varepsilon \Big\} \le 4 P\Big\{ | W(1) | \ge \frac{\varepsilon}{F(\delta_n)^{1/2}} \Big\},$$
we have that
$$n^{-1/2} \sup_{t \in [0, F(\delta_n)]} | W(t) | = o_p( n^{-2/3} ),$$
which implies that $Z_n 1_{\{ U_n(f(0)) \le \delta_n \}} = o_p( n^{-2/3} )$. Together with (2.10) and (2.11) this proves that
$$\int_0^1 \big( \hat f_n(t) - f(0) \big)^+ dt = O_p( n^{-2/3} ). \qquad \blacksquare$$

3. Brownian motion approximation. In this section we show that it is sufficient to prove Theorem 1.1 for a similar process, with Brownian motion replacing the empirical process. Let $E_n$ denote the empirical process $\sqrt{n}(F_n - F)$ and let $V_n^E(a)$ be defined as in (2.1). Then we have, for fixed $a \in (f(1), f(0))$,

(3.1) $V_n^E(a) = \mathrm{argmax}_t \{ D_n^E(a, t) - n^{1/3} a t \},$

where $t \mapsto D_n^E(a, t)$ is the drifting empirical process
$$D_n^E(a, t) = n^{1/6} \big\{ E_n\big( g(a) + n^{-1/3} t \big) - E_n\big( g(a) \big) \big\} + n^{2/3} \big\{ F\big( g(a) + n^{-1/3} t \big) - F\big( g(a) \big) \big\},$$
and where the argmax is taken over all values of $t$ such that $g(a) + n^{-1/3} t \in [0, 1]$. Here the argmax function is the supremum of the times at which the maximum is attained (in order to have a well-defined functional also when the maximum is attained at more than one point).

Let the Brownian bridge $B_n$ and the uniform empirical process $E_n \circ F^{-1}$ be constructed on the same probability space via the Hungarian embedding of Komlós, Major and Tusnády (1975). Let

(3.2) $V_n^B(a) = \mathrm{argmax}_t \{ D_n^B(a, t) - n^{1/3} a t \},$

where

(3.3) $D_n^B(a, t) = n^{1/6} \big\{ B_n\big( F( g(a) + n^{-1/3} t ) \big) - B_n\big( F( g(a) ) \big) \big\} + n^{2/3} \big\{ F\big( g(a) + n^{-1/3} t \big) - F\big( g(a) \big) \big\}.$

Then (3.1) suggests that $V_n^E(a)$ is close to $V_n^B(a)$. We will show that this is indeed the case. We define versions $W_n$ of Brownian motion by

(3.4) $W_n(t) = B_n(t) + \xi_n t, \quad t \in [0, 1],$

where $\xi_n$ is a standard normal random variable, independent of $B_n$. Moreover, let

(3.5) $V_n^W(a) = \mathrm{argmax}_t \{ D_n^W(a, t) - n^{1/3} a t \},$

where

(3.6) $D_n^W(a, t) = n^{1/6} \big\{ W_n\big( F( g(a) + n^{-1/3} t ) \big) - W_n\big( F( g(a) ) \big) \big\} + n^{2/3} \big\{ F\big( g(a) + n^{-1/3} t \big) - F\big( g(a) \big) \big\}.$
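The decomposition (3.4) can be exercised directly: build a Brownian bridge on a grid, set $W_n = B_n + \xi_n \cdot \mathrm{id}$, and compare the maximizers of the two drifted processes from (3.2) and (3.5). The sketch below is our own discretization for $f(t) = 1.5 - t$ (maximizing over the original time variable $s$ is equivalent to the local variable $t = n^{1/3}(s - g(a))$ up to additive constants, which do not affect the argmax); grid and sample size are illustrative assumptions.

```python
import numpy as np

def compare_argmax(n=1000, a=1.0, f0=1.5, grid=20000, rng=None):
    """One draw of (V_n^B(a), V_n^W(a)) for f(t) = f0 - t on [0, 1],
    with W_n(t) = B_n(t) + xi_n t as in (3.4). We maximize
    n^{1/6} X(F(s)) + n^{2/3} (F(s) - a s) over s in [0, 1] for
    X = B_n and X = W_n, which matches (3.2) and (3.5)."""
    rng = np.random.default_rng() if rng is None else rng
    s = np.linspace(0.0, 1.0, grid + 1)
    F = f0 * s - s ** 2 / 2                 # here F(1) = f0 - 1/2 = 1
    # Brownian motion in the time scale F(s), pinned into a bridge
    W0 = np.concatenate(([0.0],
                         np.cumsum(rng.normal(0.0, np.sqrt(np.diff(F))))))
    B = W0 - F * W0[-1]                     # Brownian bridge at F(s)
    W = B + rng.normal() * F                # version (3.4) of Brownian motion
    drift = n ** (2 / 3) * (F - a * s)
    g = f0 - a                              # g(a), here 0.5
    vB = n ** (1 / 3) * (s[np.argmax(n ** (1 / 6) * B + drift)] - g)
    vW = n ** (1 / 3) * (s[np.argmax(n ** (1 / 6) * W + drift)] - g)
    return vB, vW
```

Across repetitions the two argmax locations typically agree or differ by a fraction of a unit on the $n^{1/3}$ scale, which is the closeness that the results of this section quantify.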

Note that $V_n^B(a)$ and $V_n^W(a)$ are defined in the same way as $V_n^E(a)$, but with $E_n$ replaced by $B_n \circ F$ and $W_n \circ F$, respectively. For $J = E, B, W$, the argmax $V_n^J(a)$ can be seen as the $t$-coordinate of the point that is touched first when dropping a line with slope $n^{1/3} a$ on the process $t \mapsto D_n^J(a, t)$. Furthermore, note that for every fixed $a, b \in (f(1), f(0))$, we have the following property:

(3.7) $V_n^J(b) + n^{1/3} \big( g(b) - g(a) \big) = \mathrm{argmax}_t \{ D_n^J(a, t) - n^{1/3} b t \},$

where as before the argmax is taken over values of $t$ such that $g(a) + n^{-1/3} t \in [0, 1]$. Hence (3.7) is the $t$-coordinate of the point that is touched first when dropping a line with slope $n^{1/3} b$ on the process $t \mapsto D_n^J(a, t)$. Moreover, note that

(3.8) $c \mapsto V_n^J(c) + n^{1/3} \big( g(c) - g(a) \big)$ jumps at $b$ if and only if $c \mapsto V_n^J(c)$ jumps at $b$.

We have the following results for $V_n^B(a)$ and $V_n^W(a)$, analogous to Theorem 2.1.

THEOREM 3.1. Let $V_n^B(a)$ and $V_n^W(a)$ be defined by (3.2) and (3.5), respectively. Then there exists a constant $C > 0$, only depending on $f$, such that for all $n \ge 1$, $a \in (f(1), f(0))$ and $x > 0$,
$$P\{ |V_n^W(a)| > x \} \le 2e^{-Cx^3} \quad \text{and} \quad P\{ |V_n^B(a)| > x \} \le 2e^{-Cx^3}.$$

PROOF. Let $a \in (f(1), f(0))$ and let $t_0 = g(a)$. We first consider
$$P\{ V_n^W(a) > x \}.$$
If $t_0 + xn^{-1/3} \ge 1$, this probability is zero, so we may assume $t_0 + xn^{-1/3} < 1$. Let the process $t \mapsto X_n^W(a, t)$ be defined by

(3.9) $X_n^W(a, t) = n^{1/6} \big\{ W_n\big( F( g(a) + n^{-1/3} t ) \big) - W_n\big( F( g(a) ) \big) \big\}, \quad t \in \big[ 0, n^{1/3}( 1 - g(a) ) \big],$

and let, for $r \in \mathbb{R}$, the process $Y_n$ be defined by

(3.10) $Y_n(t) = \dfrac{ \exp\big( r X_n^W(a, t) \big) }{ E \exp\big( r X_n^W(a, t) \big) }, \quad t \in \big[ 0, n^{1/3}( 1 - t_0 ) \big].$

Then $Y_n$ is a martingale with respect to the filtration induced by $t \mapsto X_n^W(a, t)$, and
$$E \exp\big( r X_n^W(a, t) \big) = \exp\big\{ \tfrac{1}{2} r^2 n^{1/3} F( t_0, t_0 + n^{-1/3} t ) \big\}.$$
We now define the stopping time $\tau_n$ by
$$\tau_n = \inf\big\{ t \in [ x, n^{1/3}( 1 - t_0 ) ]: Z_n^W(a, t) \ge 0 \big\},$$
where $Z_n^W(a, t) = D_n^W(a, t) - n^{1/3} a t$, with $D_n^W$ defined in (3.6). If $Z_n^W(a, t) < 0$ for all $t \in [ x, n^{1/3}( 1 - t_0 ) ]$, we define $\tau_n = \infty$. By the optional stopping theorem [cf. Rogers and Williams (1997), page 189] we have
$$E\, Y_n\big( \tau_n \wedge n^{1/3}( 1 - t_0 ) \big) = E\, Y_n(0) = 1.$$
On the other hand,
$$\begin{aligned}
E\, Y_n\big( \tau_n \wedge n^{1/3}( 1 - t_0 ) \big)
&\ge E\, Y_n( \tau_n ) 1_{\{ \tau_n < \infty \}} \\
&\ge E \exp\big\{ -n^{2/3} r F( t_0, t_0 + n^{-1/3} \tau_n ) + n^{1/3} r a \tau_n
- \tfrac{1}{2} r^2 n^{1/3} F( t_0, t_0 + n^{-1/3} \tau_n ) \big\} 1_{\{ \tau_n < \infty \}} \\
&\ge E \exp\{ c_1 r \tau_n^2 - c_2 r^2 \tau_n \} 1_{\{ \tau_n < \infty \}},
\end{aligned}$$
where $c_1 = \tfrac{1}{2} \inf_{t \in (0,1)} |f'(t)|$ and $c_2 = f(0)$. If we take $r = c_1 x/(2c_2)$ and $C = c_1^2/(4c_2)$, we conclude that
$$1 = E\, Y_n\big( \tau_n \wedge n^{1/3}( 1 - t_0 ) \big) \ge E \exp\{ C x \tau_n ( 2\tau_n - x ) \} 1_{\{ \tau_n < \infty \}} \ge \exp\{ C x^3 \}\, P\{ \tau_n < \infty \}.$$
Hence we find
$$P\{ V_n^W(a) > x \} \le P\Big\{ \sup_{t \in [ x, n^{1/3}( 1 - t_0 ) ]} Z_n^W(a, t) \ge 0 \Big\} = P\{ \tau_n < \infty \} \le \exp\{ -C x^3 \}.$$
For the opposite inequality we note that
$$P\{ V_n^W(a) < -x \} \le P\Big\{ \sup_{t \in [ x, n^{1/3} t_0 ]} Z_n^W(a, -t) \ge 0 \Big\}.$$
This can be bounded in the same way as before, by introducing the stopping time
$$\tilde\tau_n = \inf\big\{ t \in [ x, n^{1/3} t_0 ]: Z_n^W(a, -t) \ge 0 \big\},$$
and applying the optional stopping argument to the backward time martingale
$$\tilde Y_n(t) = \frac{ \exp\big( r X_n^W(a, -t) \big) }{ E \exp\big( r X_n^W(a, -t) \big) }, \quad t \in [ 0, n^{1/3} t_0 ].$$
For the argmax associated with the Brownian bridge we have, with (3.4),
$$V_n^B(a) = \mathrm{argmax}_t \big\{ Z_n^W(a, t) - n^{1/6} F( t_0, t_0 + n^{-1/3} t )\, \xi_n \big\}.$$
Now choose $\delta > 0$ in such a way that $\delta f(0) < \tfrac{1}{4} \inf_{t \in (0,1)} |f'(t)|$, and note that for $x \le n^{1/3}$,
$$P\{ |\xi_n| > \delta n^{1/6} x \} \le \exp\{ -\tfrac{1}{2} \delta^2 n^{1/3} x^2 \} \le \exp\{ -\tfrac{1}{2} \delta^2 x^3 \}.$$
Hence
$$\begin{aligned}
P\{ V_n^B(a) > x \}
&\le P\Big\{ \sup_{t \in [ x, n^{1/3}( 1 - t_0 ) ]} \big( Z_n^W(a, t) + \delta x n^{1/3} F( t_0, t_0 + n^{-1/3} t ) \big) \ge 0 \Big\} + \exp\{ -\tfrac{1}{2} \delta^2 x^3 \} \\
&\le P\Big\{ \sup_{t \in [ x, n^{1/3}( 1 - t_0 ) ]} \big( X_n^W(a, t) - c_1' t^2 \big) \ge 0 \Big\} + \exp\{ -\tfrac{1}{2} \delta^2 x^3 \},
\end{aligned}$$
with $c_1' = \tfrac{1}{4} \inf_{t \in (0,1)} |f'(t)|$. Repeating the above optional stopping argument with $\tau_n$ replaced by the stopping time

(3.11) $\tau_n' = \inf\big\{ t \in [ x, n^{1/3}( 1 - t_0 ) ]: X_n^W(a, t) - c_1' t^2 \ge 0 \big\},$

the first probability in the last expression is bounded from above by $\exp\{ -C' x^3 \}$, where $C' = ( c_1' )^2/( 4c_2 )$, with $c_2$ as before. It follows that
$$P\{ V_n^B(a) > x \} \le 2e^{-C x^3},$$
for all $x > 0$ and some $C > 0$, only depending on $f$. Similarly,
$$P\{ V_n^B(a) < -x \} \le P\Big\{ \sup_{t \in [ x, n^{1/3} t_0 ]} \big( X_n^W(a, -t) - c_1' t^2 \big) \ge 0 \Big\} + \exp\{ -\tfrac{1}{2} \delta^2 x^3 \}.$$
The bound on $P\{ V_n^B(a) < -x \}$ is obtained by using the stopping time
$$\tilde\tau_n' = \inf\big\{ t \in [ x, n^{1/3} t_0 ]: X_n^W(a, -t) - c_1' t^2 \ge 0 \big\}$$
and applying the optional stopping argument to the backward time martingale $\tilde Y_n$. $\blacksquare$

REMARK 3.1. Theorem 3.1 for $V_n^W$ holds more generally. Let $L_n(a)$ be the location of the maximum of the process $t \mapsto X_n^W(a, t) - \Delta_n(a, t)$, where $X_n^W$ is defined in (3.9) and $\Delta_n(a, t) \ge c_1 t^2$, uniformly for $t \in \big[ 0, n^{1/3}\big( t_0 \vee ( 1 - t_0 ) \big) \big]$. By the same argument as in the proof of Theorem 3.1, it follows that $P\{ |L_n(a)| > x \} \le 2\exp\{ -C x^3 \}$, where $C$ only depends on $c_1$.

The following theorem shows that properly normalized versions of $V_n^J(a)$ converge in distribution to a centered version of (1.3). For $a \in (f(1), f(0))$, let
$$J_n(a) = \big\{ c: a - \phi_2(a)\, c\, n^{-1/3} \in ( f(1), f(0) ) \big\},$$
and for $J = E, B, W$ and $c \in J_n(a)$, we define

(3.12) $V_{n,a}^J(c) = \phi_1(a)\, V_n^J\big( a - \phi_2(a)\, c\, n^{-1/3} \big),$

where
$$\phi_1(a) = \frac{ | f'( g(a) ) |^{2/3} }{ ( 4a )^{1/3} } > 0, \qquad \phi_2(a) = ( 4a )^{1/3} | f'( g(a) ) |^{1/3} > 0.$$
For $c \in \mathbb{R}$, let

(3.13) $\xi(c) = V(c) - c,$

with $V(c)$ defined in (1.3).

THEOREM 3.2. For $J = E, B, W$, $d \ge 1$, $a \in (f(1), f(0))$ and $c_1, \dots, c_d \in J_n(a)$, we have joint distributional convergence of $\big( V_{n,a}^J(c_1), \dots, V_{n,a}^J(c_d) \big)$ to the random vector $\big( \xi(c_1), \dots, \xi(c_d) \big)$.

Ž . y1r3 ␾ a cn2 , we have that

˜

W W y1r3 Vn , a

Ž .

c s␾ a V1

Ž .

n

Ž

ay␾ a cn2

Ž .

.

q␾ a n1r3 g ay␾ a cny1r3 y g a ,

Ž .



Ž

Ž .

.

Ž .

4

1 2 W Ž .

is the argmax of the process t¬ Zn, a c, t , where

1r2 ␾ a1

Ž .

y1 W 1r6 y1r3 Zn , a

Ž

c, t

.

s a1r2 n

½

W F g an

ž

Ž

Ž .

q n ␾ a1

Ž .

t

.

/

yW F g an

Ž

Ž

Ž .

.

.

5

1r2 ␾ a1

Ž .

2r3 y1r3 y1 q a1r2 n

½

F g a

Ž

Ž .

q n ␾ a1

Ž .

t

.

y1 y1r3 yF g a y n

Ž

Ž .

.

a␾ a1

Ž .

t

5

q 2ct. Ž . 1r3Ž Ž Ž . y1r3. Ž ..

Note that␾ a n1 g ay␾ a cn2 y g a converges to c, as n ª ⬁. By using Brownian scaling, a simple Taylor expansion and the uniform continu-Ž . ity of Brownian motion on compacta, for each ks 1, 2, . . . and each c g J an

(14)

we have w sup Zn , a

Ž

c, t

.

y Z c, t ª 0,

Ž

.

P as nª ⬁, < <tFk where 1r2 ␾ a1

Ž .

at 2 2 Z c, t

Ž

.

s

ž

/

W

ž

/

y t y 2ct s W t y t q 2ct.

Ž

.

d

Ž .

a ␾ a1

Ž .

Ž .

Now let dG 1 and note that for t s t , . . . , t ,1 d

d W W W

˜

˜

V

Ž

c

.

, . . . , V

Ž

c

.

s argmax Z

Ž

c , t ,

.

Ž

n , a 1 n , a d

.

Ý

n , a i i t is1 d V c

Ž

.

, . . . , V c

Ž

.

s argmax Z c , t .

Ž

.

Ž

1 d

.

Ý

i i t is1 Finally, because d d d W W sup

Ý

Zn , a

Ž

c , ti i

.

y

Ý

Z c , t

Ž

i i

.

F

Ý

sup Zn , a

Ž

c , ti i

.

y Z c , t ,

Ž

i i

.

5 5tFk is1 is1 is1< <tiFk d W Ž .

we conclude that the process t¬ Ýis1Zn, a c , ti i converges in the uniform

d

Ž . W

topology on compacta to the process t¬ Ýis1Z c , t . The result for Vi i n

Ž .

follows from Theorem 2.7 in Kim and Pollard 1990 .

Ž . B

Using 3.4 we can prove the same result for Vn by repeating the above steps, since ny1r6␰ t ª 0 in probability, uniformly in t on compacta of ⺢.n

< E

Ž . BŽ .< Ž y1r2 .

Finally, by using suptg ⺢ Dn a, t y D a, t s O nn p log n , the same result follows for VE. I

n

We will need some independence structure for the process $\{ U_n^W(a): a \in ( f(1), f(0) ) \}$, where
$$U_n^W(a) = \mathrm{argmax}_{t \in [0, 1]} \big\{ W_n( F(t) ) + \sqrt{n}\, \big( F(t) - a t \big) \big\}.$$
The mixing property of the process $U_n^W$ can be argued intuitively in the following way. Observe that the event $\{ U_n^W(a) = x \}$ is equivalent to
$$W_n( F(x) ) - W_n( F(t) ) \ge \sqrt{n}\, \big( F(t) - F(x) \big) + a \sqrt{n}\, ( x - t ), \quad t < x,$$
$$W_n( F(x) ) - W_n( F(t) ) > \sqrt{n}\, \big( F(t) - F(x) \big) + a \sqrt{n}\, ( x - t ), \quad t > x.$$
These are conditions on increments of $W_n \circ F$. Since for large $M$, the event $\{ | U_n^W(a) - g(a) | < n^{-1/3} M \}$ has a probability close to 1, we can restrict $t$ and $x$ to $n^{-1/3} M$-neighborhoods of $g(a)$. The mixing property then follows from the fact that Brownian motion has independent increments.

THEOREM 3.3. The process $\{ U_n^W(a): a \in ( f(1), f(0) ) \}$ is strong mixing with mixing function

(3.14) $\alpha_n(d) = 12 \exp\{ -C_1 n d^3 \},$

where the constant $C_1 > 0$ only depends on $f$. More specifically, for arbitrary $a \in ( f(1), f(0) )$ and $a + d \in ( f(1), f(0) )$,
$$\sup \big| P( A \cap B ) - P(A) P(B) \big| \le \alpha_n(d),$$
where the supremum is taken over all sets $A \in \sigma\{ U_n^W(c): f(1) < c \le a \}$ and $B \in \sigma\{ U_n^W(c): a + d \le c < f(0) \}$.

PROOF. Let $a \in ( f(1), f(0) )$ be arbitrary and take $f(1) < a_1 \le a_2 \le \dots \le a_k = a < a + d = c_1 \le c_2 \le \dots \le c_l < f(0)$, and consider the events
$$E_1 = \big\{ U_n^W(a_1) \in A_1, \dots, U_n^W(a_k) \in A_k \big\},$$
$$E_2 = \big\{ U_n^W(c_1) \in B_1, \dots, U_n^W(c_l) \in B_l \big\},$$
for Borel sets $A_1, \dots, A_k$ and $B_1, \dots, B_l$ of $\mathbb{R}$. Note that cylinder sets of the form $E_1$ and $E_2$ generate the $\sigma$-algebras $\sigma\{ U_n^W(c): f(1) < c \le a \}$ and $\sigma\{ U_n^W(c): a + d \le c < f(0) \}$, respectively. Now take $M_n = \tfrac{1}{4} d n^{1/3} \inf_u | g'(u) |$ and consider the events
$$E_1' = E_1 \cap \big\{ U_{n, M_n}^W(a) = U_n^W(a) \big\},$$
$$E_2' = E_2 \cap \big\{ U_{n, M_n}^W( a + d ) = U_n^W( a + d ) \big\},$$
where
$$U_{n, M_n}^W(c) = \mathrm{argmax}\big\{ n^{1/3} | t - g(c) | \le M_n: W_n( F(t) ) + \sqrt{n}\, \big( F(t) - c t \big) \big\}.$$
By monotonicity of $U_n^W$ it follows that the event $E_1'$ depends only on the increments of Brownian motion beyond time $F\big( g(a) - n^{-1/3} M_n \big)$ (note that $g$ is decreasing) and that the event $E_2'$ is only dependent on the increments of Brownian motion before time $F\big( g( a + d ) + n^{-1/3} M_n \big)$. By definition of $M_n$, it follows that $E_1'$ and $E_2'$ are independent. Since for all $a \in ( f(1), f(0) )$ we have that $V_n^W(a) = n^{1/3}\big( U_n^W(a) - g(a) \big)$, according to Theorem 3.1,
$$\begin{aligned}
\big| P( E_1 \cap E_2 ) - P( E_1 ) P( E_2 ) \big|
&\le 3 P\big\{ U_{n, M_n}^W(a) \ne U_n^W(a) \big\} + 3 P\big\{ U_{n, M_n}^W( a + d ) \ne U_n^W( a + d ) \big\} \\
&= 3 P\big\{ n^{1/3} | U_n^W(a) - g(a) | > M_n \big\} + 3 P\big\{ n^{1/3} | U_n^W( a + d ) - g( a + d ) | > M_n \big\} \\
&\le 12 \exp\{ -C M_n^3 \},
\end{aligned}$$
which proves the theorem. $\blacksquare$

Apart from the exponential bound on the mixing function we will need the following two lemmas. The lemmas are analogous to Theorems 17.2.1 and 17.2.2 in Ibragimov and Linnik (1971) and can be proven similarly, since in the quoted Theorems 17.2.1 and 17.2.2 the stationarity is not essential.

LEMMA 3.1. If $X$ is measurable with respect to $\sigma\{ U_n^W(c): f(1) < c \le a \}$ and $Y$ is measurable with respect to $\sigma\{ U_n^W(c): a + d \le c < f(0) \}$ ($d > 0$), and if $|X| \le C_2$, $|Y| \le C_3$ a.s., then
$$\big| E( XY ) - E(X) E(Y) \big| \le 4 C_2 C_3\, \alpha_n(d).$$

LEMMA 3.2. If $X$ is measurable with respect to $\sigma\{ U_n^W(c): f(1) < c \le a \}$ and $Y$ is measurable with respect to $\sigma\{ U_n^W(c): a + d \le c < f(0) \}$ ($d > 0$), and suppose that for some $\delta > 0$,
$$E |X|^{2+\delta} \le C_4, \qquad E |Y|^{2+\delta} \le C_5,$$
then
$$\big| E( XY ) - E(X) E(Y) \big| \le C_6 \big( \alpha_n(d) \big)^{\delta/(2+\delta)},$$
where $C_6 > 0$ only depends on $C_4$ and $C_5$.

In the following, we shall need some properties of the process V, which are

Ž . Ž .

contained in Groeneboom 1989 and Hooghiemstra and Lopuhaa 1998 .

¨

They are stated in the following lemma.

Ž . Ž . Ž .

LEMMA 3.3. Let V 0 be defined in 1.3 and for b, cg ⺢, let V c beb

defined by 2 3.15 V c s argmax W t y b t y c .

Ž

.

b

Ž .



Ž .

Ž

.

4

t Then:

Ž .i V 0 has a bounded symmetric density.Ž .

2

y1 3

Ž .ii For xª ⬁, P V 0 ) x ;< Ž .< 4 ␭ x expŽy x y3 ␬ x , where ␭, ␬ ) 0.. Žiii For h. x0, P V jumps in a y h, a q h F b Ž .4 ␤ h 1 q o 1 , where the1 Ž Ž .. constant ␤ ) 0 is independent of a.1

PROOF. (i)–(ii) The first statement follows immediately from the representation for the density of $V(0)$ given in Groeneboom (1989). The second statement is Lemma 2.1 in Hooghiemstra and Lopuhaä (1998).

(iii) Let $A_h = \{V \text{ jumps in } [0, h)\}$. Since the process $c \mapsto \xi(c)$ is stationary and has jumps at the same points as the process $c \mapsto V(c)$, we have that
$$ P\{V \text{ jumps in } (a - h, a + h)\} = P\{V \text{ jumps in } (-h, h)\} \le 2\int_{-\infty}^{\infty} P\{A_h \mid V(0) = x\}\, f_{V(0)}(x)\, dx, $$
where we also use the fact that $-V(-c) \stackrel{d}{=} V(c)$. In the proof of Theorem 3.1 in Hooghiemstra and Lopuhaä (1998) it is derived that
$$ \lim_{h \downarrow 0} \frac{1}{h}\, P\{A_h \mid V(0) = x\} = 2\int_0^{\infty} u\, p(u)\, \frac{g_1(u + x)}{g_1(x)}\, du $$
(see that paper for the definitions of the functions $g_1$ and $p$), and moreover that the right-hand side is bounded uniformly in $x$. This implies that
$$ P\{V \text{ jumps in } (a - h, a + h)\} \le \beta_1' h + o(h), \qquad h \downarrow 0, $$
where the constant $\beta_1'$ is independent of $a$. By Brownian scaling we have that
$$ (3.16)\qquad V_b(c) \stackrel{d}{=} b^{-2/3}\, V(cb^{2/3}), $$
so that
$$ P\{V_b \text{ jumps in } (a - h, a + h)\} \le b^{2/3}\beta_1' h + o(h), \qquad h \downarrow 0, $$
which proves (iii). ∎
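The scaling relation (3.16) can also be checked directly (a routine verification, spelled out here for the reader): substituting $t = b^{-2/3}s$ and using the distributional identity $\{W(b^{-2/3}s)\}_s \stackrel{d}{=} \{b^{-1/3}W(s)\}_s$,

```latex
% Verification of (3.16): V_b(c) =_d b^{-2/3} V(c b^{2/3}).
W(b^{-2/3}s) - b\,(b^{-2/3}s - c)^2
  \;\stackrel{d}{=}\; b^{-1/3}W(s) - b^{-1/3}\,(s - b^{2/3}c)^2
  \;=\; b^{-1/3}\bigl[\,W(s) - (s - b^{2/3}c)^2\,\bigr],
```

so, since the positive factor $b^{-1/3}$ does not affect the location of the maximum, the maximizing $s$ is distributed as $V(cb^{2/3})$ and the maximizing $t = b^{-2/3}s$ is distributed as $b^{-2/3}V(cb^{2/3})$.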

Leaving the setting of the process $V$, it seems intuitively clear that the processes $V_n^B$ and $V_n^W$ have the same qualitative behavior and will in particular satisfy a property analogous to Lemma 3.3(iii). This will be proved in the following lemma.

LEMMA 3.4. Let the interval $J_n$ be defined by
$$ J_n = \bigl[\, f(1) + n^{-1/3}(\log n)^2,\; f(0) - n^{-1/3}(\log n)^2 \,\bigr]. $$
Then there exists a constant $\beta_2 > 0$, independent of $a \in J_n$, such that for $J = B, W$ and for all $h \in (0, 1)$,
$$ P\bigl\{V_n^J \text{ jumps in } (a - hn^{-1/3},\, a + hn^{-1/3})\bigr\} \le \beta_2\,\delta_{n,h} + o(\delta_{n,h}), $$
as $n \to \infty$, $h \downarrow 0$, where $\delta_{n,h} = h \vee n^{-1/3}(\log n)^2$.

PROOF. We first show the statement for $V_n^W$. Let $t_0 = g(a)$. For notational convenience define, for $|c| \le 1$,
$$ V_n^W(a, c) = V_n^W(a + n^{-1/3}c) + n^{1/3}\bigl\{g(a + n^{-1/3}c) - g(a)\bigr\}. $$
Define the event $A_n = \{|V_n^W(a, c)| \le \log n \text{ for all } |c| \le 1\}$. From (3.7) it follows that the process $c \mapsto V_n^W(a, c)$ is nonincreasing. Therefore,
$$ P(A_n^c) \le P\{V_n^W(a, -1) > \log n\} + P\{V_n^W(a, 1) < -\log n\}. $$
Since $n^{1/3}|g(a \pm n^{-1/3}) - g(a)| \le \sup_{u \in (0,1)}|g'(u)|$, it follows from conditions (A1)–(A3) and Theorem 3.1 that $P(A_n^c) = O(\exp(-C(\log n)^3))$. Hence we can restrict ourselves to $A_n$.

In order to transform $t \mapsto W_n(F(t_0 + n^{-1/3}t))$ into a process $y \mapsto W_n(F(t_0) + n^{-1/3}y)$, define $H_n$ by
$$ (3.17)\qquad H_n(y) = n^{1/3}\bigl\{H(F(t_0) + n^{-1/3}y) - t_0\bigr\}, \qquad y \in \bigl[-n^{1/3}F(t_0),\, n^{1/3}(1 - F(t_0))\bigr], $$
where $H$ is the inverse of $F$. Consider the process $V_n^W$ as defined in (3.5), with $t$ replaced by $H_n(y)$. Then by property (3.7) it follows that
$$ V_n^W(a, c) = \sup\bigl\{H_n(y) \in [-n^{1/3}t_0,\, n^{1/3}(1 - t_0)] : \widetilde W_n(a, y) - p_n(c, y) \text{ is maximal}\bigr\}, $$
where
$$ (3.18)\qquad \widetilde W_n(a, y) = n^{1/6}\bigl\{W_n(F(g(a)) + n^{-1/3}y) - W_n(F(g(a)))\bigr\} $$
and
$$ (3.19)\qquad p_n(c, y) = -n^{1/3}y + n^{1/3}(a + n^{-1/3}c)H_n(y). $$
Conditions (A1)–(A3) imply that there exists a constant $K_1 > 0$, only depending on $f$, such that on $A_n$ we have
$$ \bigl|H_n^{-1}\bigl(V_n^W(a, c)\bigr)\bigr| \le K_1 \log n. $$
Suppose that the process $c \mapsto V_n^W(c)$ jumps in the interval $(a - hn^{-1/3}, a + hn^{-1/3})$. Then from (3.8) it follows that the process $c \mapsto V_n^W(a, c)$ has a jump at some $c_0 \in (-h, h)$. This means that if we drop the function $y \mapsto p_n(c_0, y) + \beta$, for varying $\beta \in \mathbb{R}$, on the process $y \mapsto \widetilde W_n(a, y)$, it first touches $\widetilde W_n(a, y)$ simultaneously in two points $(y_1, w_1)$ and $(y_2, w_2)$. Note that on the event $A_n$, we have $|y_1 - y_2| \le 2K_1 \log n$. We first show that for each $y_i$, $i = 1, 2$, we can construct a parabola that lies above $p_n(c_0, y)$ for all $|y| \le K_1 \log n$, and that touches $p_n(c_0, y)$ at $(y_i, w_i)$.

To this end consider the second derivative of $p_n(c, y)$. Conditions (A1)–(A3) imply that for $|c| < 1$, there exists a constant $K_2 > 0$, only depending on $f$, such that
$$ p_n''(c, y) = \frac{d^2 p_n(c, y)}{dy^2} \le aH''(F(t_0))\bigl\{1 + K_2 n^{-1/3}(1 + |y|)\bigr\}. $$
Choose $M > K_2$ and define the parabola
$$ (3.20)\qquad \pi_n(c, y) = ca^{-1}y + \alpha_n y^2, $$
where $\alpha_n = \tfrac{1}{2}\, aH''(F(t_0))\{1 + Mn^{-1/3}(1 + K_1 \log n)\}$. Then it follows immediately that for all $|y| \le K_1 \log n$, $|c| < 1$ and $b \in \mathbb{R}$,
$$ \pi_n''(b, y) > p_n''(c, y). $$
If we choose $b_1$ such that $b_1 a^{-1} + 2\alpha_n y_1 = p_n'(c_0, y_1)$, then $\pi_n(b_1, y)$ and $p_n(c_0, y)$ have the same tangent at $y_1$. If we also take $\beta_1 = p_n(c_0, y_1) - \pi_n(b_1, y_1)$, then it follows that the parabola $\pi_n(b_1, y) + \beta_1$ lies above $p_n(c_0, y)$ and touches $p_n(c_0, y)$ at $y_1$. This implies that if we drop $\pi_n(b_1, y) + \beta$, for varying $\beta \in \mathbb{R}$, on the process $y \mapsto \widetilde W_n(a, y)$, it first touches $\widetilde W_n(a, y)$ at $y_1$. A similar construction holds at $y_2$ with a suitable choice for $b_2$ (see Figure 1). Hence if we define
$$ V_n^{\pi}(c) = \sup\bigl\{H_n(y) \in [-n^{1/3}t_0,\, n^{1/3}(1 - t_0)] : \widetilde W_n(a, y) - \pi_n(c, y) \text{ is maximal}\bigr\}, $$

FIG. 1. The function $p_n(c_0, y)$ (straight line) and parabolas $\pi_n(b_1, y)$ and $\pi_n(b_2, y)$ (dotted) touching the process $y \mapsto \widetilde W_n(a, y)$ at $y_1$ and $y_2$.

then from the above construction, it follows that the process $c \mapsto V_n^{\pi}(c)$ has a jump in the interval $[b_1, b_2]$ of maximal size $|y_2 - y_1| \le 2K_1 \log n$. Since $p_n'(c_0, y_i) = \pi_n'(b_i, y_i)$, for $i = 1, 2$, it follows from conditions (A1)–(A3) that there exists a constant $K_3 > 0$, only depending on $f$, such that
$$ |b_i - c_0| \le K_3 |y_i|\, n^{-1/3} \log n, \qquad i = 1, 2. $$
Because $c_0 \in (-h, h)$, this means that the interval $[b_1, b_2]$ is contained in
$$ I_n = \Bigl(-K_4\bigl(h \vee n^{-1/3}(\log n)^2\bigr),\; K_4\bigl(h \vee n^{-1/3}(\log n)^2\bigr)\Bigr) $$
for some $K_4 > 1 \vee K_1K_3$. We conclude that, on the event $A_n$, we have that if $c \mapsto V_n^W(c)$ jumps in the interval $(a - hn^{-1/3}, a + hn^{-1/3})$, then the process $c \mapsto V_n^{\pi}(c)$ jumps in the interval $I_n$. However, the process $y \mapsto \widetilde W_n(a, y)$ is distributed like Brownian motion $W$, so $V_n^{\pi}(c)$ is distributed as
$$ \sup\bigl\{y \in [-n^{1/3}F(t_0),\, n^{1/3}(1 - F(t_0))] : W(y) - ca^{-1}y - \alpha_n y^2 \text{ is maximal}\bigr\}. $$
On the event $A_n$, this random variable is only different from
$$ V_n(c) = \operatorname*{argmax}_{y \in \mathbb{R}} \Bigl\{W(y) - \alpha_n\Bigl(y + \frac{c}{2a\alpha_n}\Bigr)^2\Bigr\}, $$
if $V_n(c)$ is outside $[-K_1 \log n, K_1 \log n]$. Hence
$$ P\{V_n^{\pi} \text{ jumps in } I_n,\, A_n\} \le P\{V_n \text{ jumps in } I_n,\, A_n\} + P\Bigl\{\sup_{c \in I_n} |V_n(c)| > K_1 \log n,\, A_n\Bigr\}. $$
According to Lemma 3.3, the first probability is of the order $h \vee (n^{-1/3}(\log n)^2)$. From the monotonicity of the process $c \mapsto V_n(c)$, property (3.16), the stationarity of the process $c \mapsto \xi(c)$ and Lemma 3.3, it follows that the second probability is of smaller order. This proves the result for $V_n^W$.
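The passage above from the criterion $W(y) - ca^{-1}y - \alpha_n y^2$ to the argmax representation of $V_n(c)$ is just completing the square (an elementary check, spelled out here):

```latex
ca^{-1}y + \alpha_n y^2
  = \alpha_n\Bigl(y + \frac{c}{2a\alpha_n}\Bigr)^2 - \frac{c^2}{4a^2\alpha_n},
```

and since the last term does not depend on $y$,
$$ \operatorname*{argmax}_y \bigl\{W(y) - ca^{-1}y - \alpha_n y^2\bigr\} = \operatorname*{argmax}_y \Bigl\{W(y) - \alpha_n\Bigl(y + \frac{c}{2a\alpha_n}\Bigr)^2\Bigr\}. $$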

Turning to the Brownian bridge and the process $c \mapsto V_n^B(c)$, for $|c| \le 1$ let
$$ V_n^B(a, c) = V_n^B(a + n^{-1/3}c) + n^{1/3}\bigl\{g(a + n^{-1/3}c) - g(a)\bigr\} $$
and
$$ \widetilde B_n(a, y) = n^{1/6}\bigl\{B_n(F(g(a)) + n^{-1/3}y) - B_n(F(g(a)))\bigr\}. $$
Then
$$ V_n^B(a, c) = \sup\bigl\{H_n(y) \in [-n^{1/3}t_0,\, n^{1/3}(1 - t_0)] : \widetilde B_n(a, y) - p_n(c, y) \text{ is maximal}\bigr\}, $$
where $p_n(c, y)$ is defined in (3.19). Now define $\psi_n(c)$ by
$$ \psi_n(c) = \sup\bigl\{y \in [-n^{1/3}F(t_0),\, n^{1/3}(1 - F(t_0))] : \widetilde B_n(a, y) - p_n(c - n^{-1/6}a\xi_n, y) \text{ is maximal}\bigr\}. $$
Then $V_n^B(a, c) = H_n(\psi_n(c + n^{-1/6}a\xi_n))$. Using (3.4), we have
$$ \psi_n(c) = \sup\bigl\{y \in [-n^{1/3}F(t_0),\, n^{1/3}(1 - F(t_0))] : \widetilde W_n(a, y) - q_n(c, y) \text{ is maximal}\bigr\}, $$
where $\widetilde W_n$ is defined in (3.18) and
$$ q_n(c, y) = n^{-1/6}\xi_n y - n^{1/3}y + n^{1/3}\bigl(a + n^{-1/3}c - n^{-1/2}a\xi_n\bigr)H_n(y). $$
Consider the event $A_n' \cap A_n''$ where
$$ A_n' = \bigl\{|V_n^B(a, c)| \le \log n \text{ for all } c \in (-h, h)\bigr\} $$
and
$$ A_n'' = \bigl\{|\xi_n| \le n^{1/6}/\log n\bigr\}. $$
Similar to the event $A_n$, we have that $P\{(A_n')^c\}$ is of the order $\exp(-C(\log n)^3)$. Furthermore, $P\{(A_n'')^c\} = 2(1 - \Phi(n^{1/6}/\log n))$, which is of smaller order than $n^{-1/3}(\log n)^2$. Hence we can restrict ourselves to the event $A_n' \cap A_n''$. Now suppose that $c \mapsto V_n^B(c)$ jumps in the interval $(a - hn^{-1/3}, a + hn^{-1/3})$. This means that the process $c \mapsto \psi_n(c)$ jumps in the interval $(-h + n^{-1/6}a\xi_n,\, h + n^{-1/6}a\xi_n)$. In that case a completely similar argument as before, involving a comparison of the derivatives of $q_n(c, y)$ and the parabola $\pi_n(c, y)$ defined in (3.20), yields that there exists a constant $K_5 > 0$, only depending on $f$, such that the process $c \mapsto V_n(c)$ jumps in the interval
$$ I_n' = \Bigl(-K_5\bigl(h \vee n^{-1/3}(\log n)^2\bigr),\; K_5\bigl(h \vee n^{-1/3}(\log n)^2\bigr)\Bigr). $$
Hence on the event $A_n' \cap A_n''$, it follows that the probability that the process $c \mapsto V_n(c)$ has a jump in the interval $I_n'$ is bounded by a probability of the order $h \vee (n^{-1/3}(\log n)^2)$.