
V. M O N S A N (Rouen)

POISSON SAMPLING FOR SPECTRAL ESTIMATION IN PERIODICALLY CORRELATED PROCESSES

Abstract. We study estimation problems for periodically correlated, non-Gaussian processes. We estimate the correlation functions and the spectral densities from continuous-time samples. From a random time sample, we construct three types of estimators for the spectral densities and we prove their consistency.

1. Introduction. The processes we encounter in applications are often assumed to be stationary in the wide sense, and many problems pertaining to such processes have been studied. In practice, however, the stationarity hypothesis is not always valid; in that case we say that the processes are nonstationary. In this paper, we consider processes satisfying, for all s, t,

E(X(t + T )) = E(X(t)) and

E(X(s + T )X(t + T )) = E(X(s)X(t))

where T is a fixed positive number; an example is the noise of a periodic oscillator.

Such processes are called periodically correlated up to order two. They are encountered in meteorology, in communications and also in radio-physics.
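As an illustration (not from the paper), this periodic invariance of the second moments can be checked by simulation: for the toy model X(t) = (1 + 0.5 cos(2πt/T)) W(t) on a grid, with W discrete Gaussian white noise, sample estimates of E{X(s)X(t)} and E{X(s + T)X(t + T)} should agree. The model and all constants below are our own choices:

```python
import math
import random

def estimate_R(pairs, T=2.0, dt=0.25, n=16, n_rep=20000, seed=0):
    """Monte Carlo estimate of R(s, t) = E{X(s)X(t)} at grid-index pairs for
    the toy PC model X(t) = (1 + 0.5 cos(2*pi*t/T)) W(t), with W discrete
    Gaussian white noise (all constants are illustrative choices)."""
    rng = random.Random(seed)
    a = [1.0 + 0.5 * math.cos(2 * math.pi * i * dt / T) for i in range(n)]
    acc = [0.0] * len(pairs)
    for _ in range(n_rep):
        w = [rng.gauss(0.0, 1.0) for _ in range(n)]
        for p, (i, j) in enumerate(pairs):
            acc[p] += a[i] * w[i] * a[j] * w[j]
    return [s / n_rep for s in acc]

T, dt = 2.0, 0.25
shift = int(T / dt)                      # one period T in grid steps
base = [(0, 0), (1, 1), (2, 2), (1, 3)]
shifted = [(i + shift, j + shift) for (i, j) in base]
R_base = estimate_R(base)
R_shift = estimate_R(shifted)
# periodic correlation: R(s + T, t + T) should match R(s, t) within noise
max_err = max(abs(b - s) for b, s in zip(R_base, R_shift))
```

With 20000 replications the Monte Carlo error is well below the tolerance used here, so the shift-invariance is visible despite the noise.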

H. L. Hurd [4] has found estimators for the characteristics of periodically correlated processes from continuous-time samples. He showed that these estimators are consistent if the process is Gaussian. We will show the consistency of these estimators under certain conditions on the fourth cumulant function (the process is not assumed to be Gaussian). However, numerical computation of the estimators leads to a discretization of the process. Thus, it is better to estimate the characteristics from discrete samples (periodic or random). We will see that these results are valid for almost periodically correlated harmonizable Gaussian processes.

1991 Mathematics Subject Classification: 62M10, 62M99, 62G07.

Key words and phrases: periodically correlated processes, Poisson sampling, quartic-mean consistency, spectral density functions.

2. Generalities

Definition. A second order real process X(t) is called periodically correlated up to order 2 with period T if

∀t ∈ R,  E{X(t)} = m(t) = m(t + T)  and

∀s, t ∈ R,  E{X(s)X(t)} = R(s, t) = R(s + T, t + T).

If X(t) is periodically correlated, then

B(t, u) = R(t + u, t),

the autocovariance function of the process, is periodic in t with period T. For fixed u, we can suppose that B(·, u) ∈ L¹[0, T], and so we assume the Fourier series representation

B(t, u) = Σ_{k=−∞}^{∞} B_k(u) exp(i(2π/T)kt),

where the coefficient functions are given by

B_k(u) = (1/T) ∫₀^T B(t, u) exp(−i(2π/T)kt) dt  (k ∈ Z).

If we assume that B_k(·) ∈ L¹(R), then

g_k(ω) = (1/(2π)) ∫_{−∞}^{∞} B_k(u) exp(−iωu) du

exists for k ∈ Z. The functions g_k(ω) (k ∈ Z) are called the spectral density functions of X(t). For B_k(·) and g_k(·), we have the following properties:

(2.1)  B_k(−u) = exp(−i(2π/T)ku) B_k(u),

g_k(ω) = (1/(2π)) ∫₀^{∞} [exp(−iωu) + exp(i(ω − (2π/T)k)u)] B_k(u) du,

g_k(ω) = g_k((2π/T)k − ω),  ḡ_k(ω) = g_{−k}(−ω)

(a bar denotes complex conjugation).
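As a numerical sanity check of the first identity in (2.1) (our own illustration, not the paper's), take the hypothetical modulated model R(s, t) = a(s)a(t)e^{−|s−t|} with a(t) = 1 + 0.5 cos(2πt/T), so that B(t, u) = a(t + u)a(t)e^{−|u|}; the coefficients B_k(u) can then be computed by numerical integration:

```python
import cmath
import math

T = 2.0
THETA = 2 * math.pi / T

def a(t):
    return 1.0 + 0.5 * math.cos(THETA * t)

def B(t, u):
    # autocovariance B(t, u) = R(t + u, t) of the toy modulated model
    return a(t + u) * a(t) * math.exp(-abs(u))

def B_k(k, u, m=2000):
    # B_k(u) = (1/T) \int_0^T B(t, u) exp(-i 2π k t / T) dt; the integrand is a
    # trigonometric polynomial in t, so a plain Riemann sum is essentially exact
    dt = T / m
    return sum(B(j * dt, u) * cmath.exp(-1j * THETA * k * j * dt)
               for j in range(m)) * dt / T

def symmetry_defect(k, u):
    # first identity in (2.1): B_k(-u) = exp(-i 2π k u / T) B_k(u)
    return abs(B_k(k, -u) - cmath.exp(-1j * THETA * k * u) * B_k(k, u))

defect = max(symmetry_defect(k, u) for k in (0, 1, 2) for u in (0.3, 0.7, 1.9))
```

For this model a(t)² has Fourier coefficient 1/2 at k = 1, so B₁(0) = 0.5 gives an extra consistency check.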

Definition. A real process X(t) having finite moments of order p is called periodically correlated up to order p with period T if for t₁, ..., t_n ∈ R and for p₁, ..., p_n ∈ N satisfying Σ_{i=1}^{n} p_i ≤ p,

E{X(t₁ + T)^{p₁} ... X(t_n + T)^{p_n}} = E{X(t₁)^{p₁} ... X(t_n)^{p_n}}.

We notice that if X(t) is PC up to order p then X(t) is PC up to order q ≤ p. Throughout the paper, the term PC processes will be used for all periodically correlated processes up to order 2.

Assuming X(t) is PC up to order 4, the functions

ν(t, u₁, u₂, u₃) = E(X(t)X(t + u₁)X(t + u₂)X(t + u₃))  and

(2.2)  K(t, u₁, u₂, u₃) = ν(t, u₁, u₂, u₃) − B(t, u₁)B(t + u₂, u₃ − u₂) − B(t, u₂)B(t + u₃, u₁ − u₃) − B(t, u₃)B(t + u₁, u₂ − u₁)

are periodic in t with period T. The last function is called the fourth cumulant function. We see that if X(t) is Gaussian, then K is identically zero.
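The vanishing of K in the Gaussian case follows from Isserlis' theorem, E(X₁X₂X₃X₄) = C₁₂C₃₄ + C₁₃C₂₄ + C₁₄C₂₃, whose three pair products are exactly the three B-products subtracted in (2.2). A numerical sketch with a hypothetical Gaussian covariance model (our choice):

```python
import math

T = 2.0

def a(t):
    return 1.0 + 0.5 * math.cos(2 * math.pi * t / T)

def R(s, t):
    # covariance of a Gaussian PC process: R(s, t) = a(s) a(t) exp(-|s - t|)
    return a(s) * a(t) * math.exp(-abs(s - t))

def B(t, u):
    return R(t + u, t)

def K(t, u1, u2, u3):
    # fourth cumulant (2.2); for a Gaussian process the fourth moment nu is
    # given by Isserlis' theorem as a sum of pairwise covariance products
    nu = (R(t, t + u1) * R(t + u2, t + u3)
          + R(t, t + u2) * R(t + u1, t + u3)
          + R(t, t + u3) * R(t + u1, t + u2))
    return (nu
            - B(t, u1) * B(t + u2, u3 - u2)
            - B(t, u2) * B(t + u3, u1 - u3)
            - B(t, u3) * B(t + u1, u2 - u1))

worst = max(abs(K(t, u1, u2, u3))
            for t in (0.0, 0.4, 1.1)
            for u1 in (-0.7, 0.3)
            for u2 in (0.2, 1.5)
            for u3 in (-1.0, 0.8))
```

Each subtracted B-product equals one pair product (e.g. B(t + u₂, u₃ − u₂) = R(t + u₃, t + u₂)), so K vanishes to machine precision.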

3. Estimation from continuous-time samples. We assume throughout that

(1) X(t) is PC up to order 4,
(2) X(t) has uniformly bounded fourth moment, E{X(t)⁴} ≤ M,
(3) m(t) ≡ 0.

From a sample of X(t), 0 ≤ t ≤ A, let us estimate:

• B_k(u) by

B̂_k(A, u) = (1/A) ∫₀^{A−u} X(t)X(t + u) exp(−i(2π/T)kt) dt  if u ≥ 0,
B̂_k(A, u) = (1/A) ∫_{−u}^{A} X(t)X(t + u) exp(−i(2π/T)kt) dt  if u < 0,

• g_k(ω) by

ĝ_k(A, ω) = (1/(2π)) ∫_{−A}^{A} h(B_A v) B̂_k(A, v) exp(−iωv) dv = (1/B_A) ∫_{−∞}^{∞} H((u − ω)/B_A) g_k(A, u) du,

where

g_k(A, λ) = (1/(2πA)) I_A(λ) Ī_A(λ − (2π/T)k)

with I_A(λ) = ∫₀^A X(t) exp(−iλt) dt; B_A is a positive function of A for which lim_{A→∞} B_A = 0; h is an even, integrable function for which |h(t)| ≤ M₀ for all t, h(0) = 1, and H(ω) = (1/(2π)) ∫_{−∞}^{∞} h(t) exp(−iωt) dt.
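To see B̂_k(A, u) at work numerically, one can apply a Riemann-sum version of it to the deterministic toy signal X(t) = cos(2πt/T) (our choice, not a random process): since X(t)² = 1/2 + (1/4)e^{i2θt} + (1/4)e^{−i2θt} with θ = 2π/T, taking A a multiple of T should give B̂₀(A, 0) ≈ 1/2 and B̂₂(A, 0) ≈ 1/4. A sketch:

```python
import cmath
import math

T = 2.0
THETA = 2 * math.pi / T

def X(t):
    return math.cos(THETA * t)

def B_hat(k, A, u, m=20000):
    # Riemann sum for B̂_k(A, u) = (1/A) \int_0^{A-u} X(t) X(t+u)
    #                                exp(-i 2π k t / T) dt   (case u >= 0)
    dt = (A - u) / m
    return sum(X(j * dt) * X(j * dt + u) * cmath.exp(-1j * THETA * k * j * dt)
               for j in range(m)) * dt / A

A = 50 * T    # an integer number of periods, so oscillating terms cancel
b0 = B_hat(0, A, 0.0)
b2 = B_hat(2, A, 0.0)
```

Over full periods the oscillatory terms integrate to zero, so the two estimates match the exact Fourier coefficients of X(t)².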

Proposition 3.1. If X(t) is PC up to order 4 with ∫_{−∞}^{∞} ∫₀^T B(t, u)² dt du < ∞ and

∀u ∈ R,  ∫_{−∞}^{∞} ∫₀^T |K(t, u, v, v + u)| dt dv < ∞,

then

∀u ∈ R,  lim_{A→∞} E{|B̂_k(A, u) − B_k(u)|²} = 0.

P r o o f. The proof will be given for fixed u ≥ 0. Let

J_k(A) = (1/A) ∫₀^{A−u} (X(t + u)X(t) − B(t, u)) exp(−i(2π/T)kt) dt = B̂_k(A, u) − E{B̂_k(A, u)}.

It suffices to show that lim_{A→∞} E{|J_k(A)|²} = 0, since

E{|B̂_k(A, u) − B_k(u)|²}^{1/2} ≤ E{|J_k(A)|²}^{1/2} + |E(B̂_k(A, u)) − B_k(u)|.

Using (2.2), we have

E{|J_k(A)|²} ≤ (1/A²) ∫₀^{A−u} ∫₀^{A−u} |B(s, t − s)B(s + u, t − s)| ds dt + (1/A²) ∫₀^{A−u} ∫₀^{A−u} |B(t, s − t + u)B(s, t − s + u)| ds dt + (1/A²) ∫₀^{A−u} ∫₀^{A−u} |K(s, u, t − s, t − s + u)| ds dt.

The first and second terms go to zero as A goes to infinity ([4], Prop. 6). Let us consider the third expression:

(1/A²) ∫₀^{A−u} ∫₀^{A−u} |K(s, u, t − s, t − s + u)| ds dt ≤ (1/A²) ∫_{−∞}^{∞} ∫₀^{2A} |K(s, u, t, t + u)| ds dt ≤ (1/A²) (1 + [2A/T]) ∫_{−∞}^{∞} dt ∫₀^T |K(s, u, t, t + u)| ds = O(1/A),

where [x] is the integer part of x.

Proposition 3.2. Let X(t) be PC up to order 4, with K ∈ L¹([0, T] × R³) and

∫_{−∞}^{∞} (∫₀^T B(t, u)² dt)^{1/2} du < ∞.

Then, for all ω₁, ω₂ ≥ 0 and for all j, k ∈ Z,

lim_{A→∞} AB_A |cov{ĝ_j(A, ω₁), ĝ_k(A, ω₂)}| ≤ (2/(2π)²) ∫₀^{∞} |h(z)|² dz ∫_{−∞}^{∞} du₁ ∫_{−∞}^{∞} S(u + u₁)S(u) du < ∞,

where S(u)² = T^{−1} ∫₀^T B(t, u)² dt = Σ_{k=−∞}^{∞} |B_k(u)|².

P r o o f. We have

AB_A cov{ĝ_j(A, ω₁), ĝ_k(A, ω₂)} = (AB_A/(2π)²) ∫_{−A}^{A} ∫_{−A}^{A} h(B_A v₁)h(B_A v₂) cov{B̂_j(A, v₁), B̂_k(A, v₂)} exp(−iω₁v₁ + iω₂v₂) dv₁ dv₂ = I₁ + I₂ + I₃,

where I₁, I₂ and I₃ are the contributions to this double integral of the three terms in the decomposition of cov{B̂_j(A, v₁), B̂_k(A, v₂)} obtained from (2.2), namely

(1/A²) ∫₀^{A−v₁} ∫₀^{A−v₂} B(t + v₂, s + v₁ − t − v₂) B(t, s − t) exp(−i(2π/T)(js − kt)) ds dt,

(1/A²) ∫₀^{A−v₁} ∫₀^{A−v₂} B(t, s − t + v₁) B(t + v₂, s − t − v₂) exp(−i(2π/T)(js − kt)) ds dt,

(1/A²) ∫₀^{A−v₁} ∫₀^{A−v₂} K(t, v₂, s − t, s − t + v₁) exp(−i(2π/T)(js − kt)) ds dt,

and

lim_{A→∞} (|I₁| + |I₂|) ≤ (C/(2π)²) ∫₀^{∞} |h(z)|² dz ∫_{−∞}^{∞} du₁ ∫_{−∞}^{∞} S(u + u₁)S(u) du

([4], Th. 3 and Prop. 7).

On the other hand,

|I₃| ≤ (B_A/(4π²A)) ∫_{−A}^{A} ∫_{−A}^{A} |h(B_A v₁)h(B_A v₂)| (∫₀^{2A} ∫₀^{2A} |K(t, v₂, s − t, s − t + v₁)| ds dt) dv₁ dv₂ ≤ (M₀²B_A/(4π²A)) (1 + [2A/T]) ∫₀^T ∫_{R³} |K(t, u₁, u₂, u₃)| dt du₁ du₂ du₃ = O(B_A).

If we choose B_A so that lim_{A→∞} AB_A = ∞, we obtain the consistency of the estimator ĝ_k(A, ω).

4. Estimation from periodic sampling. Let X(t) be a PC process with period T and with autocovariance function

B(t, u) = Σ_{k∈Z} B_k(u) exp(i(2π/T)kt).

Let h > 0 and consider the process Y_n = X(nh), n ∈ Z. By analogy with the stationary case, we may ask whether it is possible to estimate the characteristics of X(t) from a sample of Y_n. Let B_h(m, n) = E(Y_{m+n}Y_m), which is the autocovariance function of Y_n. Assume first that T/h = N is a positive integer; then for all integers m and n, B_h(m + N, n) = B_h(m, n), therefore Y_n is PC in discrete time. In this case, we can write

B_h(m, n) = Σ_{l=0}^{N−1} B_{lh}(n) exp(i(2π/N)ml)  with  B_{lh}(n) = (1/N) Σ_{m=0}^{N−1} B_h(m, n) exp(−i(2π/N)ml).

On the other hand,

B_h(m, n) = B(mh, nh) = Σ_{k=−∞}^{∞} B_k(nh) exp(i(2π/T)kmh) = Σ_{l=0}^{N−1} Σ_{j=−∞}^{∞} B_{jN+l}(nh) exp(i(2π/N)(jN + l)m) = Σ_{l=0}^{N−1} (Σ_{j=−∞}^{∞} B_{jN+l}(nh)) exp(i(2π/N)lm),

so B_{lh}(n) = Σ_{j=−∞}^{∞} B_{jN+l}(nh).
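The folding relation B_{lh}(n) = Σ_j B_{jN+l}(nh) is purely algebraic and can be checked numerically: with a toy truncated coefficient family B_k(u) (our choice below, not required to be a valid autocovariance), the discrete Fourier coefficient of m ↦ B(mh, nh) should equal the folded sum. A sketch:

```python
import cmath
import math

T = 2.0
N = 4                # T / h
h = T / N
KMAX = 9             # B_k = 0 for |k| > KMAX in this toy family

def B_coef(k, u):
    # toy coefficient family; the folding identity needs no positivity
    if abs(k) > KMAX:
        return 0.0
    return 2.0 ** (-abs(k)) * math.exp(-u * u) * cmath.exp(2j * math.pi * k * u / T)

def B(t, u):
    return sum(B_coef(k, u) * cmath.exp(2j * math.pi * k * t / T)
               for k in range(-KMAX, KMAX + 1))

def B_lh_dft(l, n):
    # (1/N) sum_m B(mh, nh) exp(-i 2π m l / N)
    return sum(B(m * h, n * h) * cmath.exp(-2j * math.pi * m * l / N)
               for m in range(N)) / N

def B_lh_folded(l, n):
    # sum_j B_{jN+l}(nh), truncated where the coefficients vanish
    return sum(B_coef(j * N + l, n * h) for j in range(-5, 6))

err = max(abs(B_lh_dft(l, n) - B_lh_folded(l, n))
          for l in range(N) for n in range(3))
```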

We will show that periodic sampling involves aliasing, that is, we can find two different autocovariance functions B₁(t, u), B₂(t, u) such that B_{1h}(m, n) = B_{2h}(m, n) for all m and n. In this case, we say that B₁(t, u) and B₂(t, u) are aliases. To show this, for fixed u, assume B(t, u) ∈ L²[0, T] and let, for k ∈ Z,

B_{p,k}(u) = cos((2π/h)pu) B_k(u),

where p is a nonzero integer. For fixed p, B_{p,k} and B_k are two different functions of u, so, in L²[0, T], the function

B_p(t, u) = Σ_{k∈Z} B_{p,k}(u) exp(i(2π/T)kt)

is different from B(t, u).

B_p(t, u) is the autocovariance function of a PC process with period T. In fact ([3], Theorem 1), it suffices to show that for integers k₁, ..., k_n, real numbers u₁, ..., u_n, and complex numbers x₁, ..., x_n,

A = Σ_{p,q=1}^{n} x_p x̄_q B_{p,k_pk_q}(u_p − u_q) ≥ 0,

with

B_{p,jk}(u) = B_{p,k−j}(u) exp(i(2π/T)ju) = (1/2)[exp(i(2π/h)pu) + exp(−i(2π/h)pu)] B_{jk}(u),

where B_{jk}(u) = B_{k−j}(u) exp(i(2π/T)ju). Now

2A = Σ_{i,j=1}^{n} x_i x̄_j exp(i(2π/h)(u_i − u_j)) B_{k_ik_j}(u_i − u_j) + Σ_{i,j=1}^{n} x_i x̄_j exp(−i(2π/h)(u_i − u_j)) B_{k_ik_j}(u_i − u_j)

= Σ_{i,j=1}^{n} (x_i exp(i(2π/h)u_i)) (x̄_j exp(−i(2π/h)u_j)) B_{k_ik_j}(u_i − u_j) + Σ_{i,j=1}^{n} (x_i exp(−i(2π/h)u_i)) (x̄_j exp(i(2π/h)u_j)) B_{k_ik_j}(u_i − u_j) ≥ 0,

since B(t, u) is the autocovariance function of a PC process with period T.

On the other hand,

Σ_{j=−∞}^{∞} B_{p,jN+l}(nh) = Σ_{j=−∞}^{∞} B_{jN+l}(nh) cos((2π/h)pnh) = Σ_{j=−∞}^{∞} B_{jN+l}(nh),

and therefore B_{ph}(m, n) = B_h(m, n).
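Numerically, the alias pair is easy to exhibit: with B_{p,k}(u) = cos((2π/h)pu)B_k(u), the cosine factor equals 1 at every lag u = nh, so the two coefficient functions agree on the sampling lattice while differing off it. A sketch with a single hypothetical coefficient (our choice):

```python
import math

h = 0.5
p = 1

def B0(u):
    # a toy coefficient function, our choice
    return math.exp(-abs(u))

def B0_alias(u):
    # aliased coefficient: cos((2π/h) p u) B0(u)
    return math.cos(2 * math.pi * p * u / h) * B0(u)

# agreement at every lattice lag u = n h, disagreement off the lattice
on_lattice = max(abs(B0(n * h) - B0_alias(n * h)) for n in range(-6, 7))
off_lattice = abs(B0(0.2) - B0_alias(0.2))
```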

When T/h is not an integer, let r(u) denote any real even function in L¹ ∩ L², having a continuous second derivative, vanishing at the points t_n = nh, and such that r′(0) = r″(0) = 0. For example, we can take r so that on ]t_n, t_{n+1}[,

r(u) = exp(1/((u − t_n − h/2)² − h²/4)).

Define a_k(u) by

a₀(u) = a₀(−u) = r(u)  for u ≥ 0,
a_k(u) = (1/k²) r(u)  and  a_k(−u) = exp(−i(2π/T)ku) a_k(u)  for k ≥ 1 and u ≥ 0,
a_{−k}(u) = ā_k(u)  for k ≥ 1 and u ∈ R.

The functions a_k(u) have continuous second derivatives, since r′(0) = r″(0) = 0, and vanish at the points t_n = nh. We also have a_k ∈ L¹ ∩ L².

For j, k ∈ Z, let

a_{jk}(u) = a_{k−j}(u) exp(i(2π/T)ju),  A_{jk}(ω) = (1/(2π)) ∫_{−∞}^{∞} a_{jk}(u) exp(−iωu) du.

We have

a_{jk}(−u) = ā_{kj}(u),  a_{p−j,q−j}(u) exp(i(2π/T)ju) = a_{pq}(u),  A_{jk}(ω) = Ā_{kj}(ω),  A_{p−j,q−j}(ω − (2π/T)j) = A_{pq}(ω),

and A_{jk} ∈ L¹ ∩ L². For ω ∈ [0, 2π/T[, there exist A_{jk}^{(i)}, i = 1, 2, such that the [A_{jk}^{(i)}(ω)]_{j,k}, i = 1, 2, are nonnegative hermitian matrices, sup_{i=1,2} |A_{jk}^{(i)}(ω)| ≤ A_{jk}(ω) and A_{jk}(ω) = A_{jk}^{(1)}(ω) − A_{jk}^{(2)}(ω). Thus for ω ∈ [(2π/T)j, (2π/T)(j + 1)[, let, for i = 1, 2,

A_{pq}^{(i)}(ω) = A_{p−j,q−j}^{(i)}(ω − (2π/T)j),  p, q ∈ Z,

B_{jk}^{(i)}(u) = ∫_{−∞}^{∞} A_{jk}^{(i)}(ω) exp(iωu) dω = B_{0,k−j}^{(i)}(u) exp(i(2π/T)ju).

Therefore, putting B_k^{(i)}(u) = B_{0,k}^{(i)}(u), we have

B_{jk}^{(i)}(u) = B_{k−j}^{(i)}(u) exp(i(2π/T)ju).

We can easily verify that |B_k^{(i)}(u)| ≤ 1/k² for k ≠ 0, therefore

B^{(i)}(t, u) = Σ_{k∈Z} B_k^{(i)}(u) exp(i(2π/T)kt)

is well defined. We have

B_{jk}^{(1)}(t_n) − B_{jk}^{(2)}(t_n) = ∫_{−∞}^{∞} (A_{jk}^{(1)}(ω) − A_{jk}^{(2)}(ω)) exp(iωt_n) dω = a_{jk}(t_n) = 0,

therefore B_k^{(1)}(t_n) = B_k^{(2)}(t_n) and, for all m, n ∈ Z, B^{(1)}(t, t_n) = B^{(2)}(t, t_n) and B^{(1)}(t_m, t_n) = B^{(2)}(t_m, t_n). We can easily show that for i = 1, 2, B^{(i)}(t, u) is the autocovariance function of a PC process with period T. We conclude therefore that B^{(1)}(t, u) and B^{(2)}(t, u) are aliases. More generally, the family B_α(t, u) = αB^{(1)}(t, u) + (1 − α)B^{(2)}(t, u), with 0 ≤ α ≤ 1, is a family of alias functions.

5. Estimation from random samples

(a) General case. Let X(t) be a PC process with period T whose autocovariance function

B(t, u) = Σ_{k∈Z} B_k(u) exp(i(2π/T)kt)

belongs to L¹([0, T] × R). We assume that the spectral density functions g_k belong to L¹ ∩ L². Let (t_n)_{n≥0} be a nondecreasing sequence of random variables such that t₀ = 0 and t_n = α_n + t_{n−1}, where (α_n)_{n≥1} is a sequence of independent, identically distributed random variables, independent of X(t), with Eα_n < ∞ and common probability density f(u). We assume that f ∈ L²(R) and that f(u) = 0 for u ≤ 0, which implies that t_n ≥ t_{n−1}. Let f_k(u) be the probability density function of the sum of k of the random variables α_i. Then if φ(ω) is the characteristic function of the α_i,

φ(ω)^k = ∫_{−∞}^{∞} f_k(u) exp(iωu) du

is the characteristic function of the sum of the α_i. Using Parseval's theorem, we get

∫_{−∞}^{∞} f_k(u)² du = (1/(2π)) ∫_{−∞}^{∞} |φ(ω)|^{2k} dω ≤ (1/(2π)) ∫_{−∞}^{∞} |φ(ω)|² dω = ∫_{−∞}^{∞} f(u)² du < ∞,

therefore f_k ∈ L² and φ^k ∈ L². Finally, we assume that, for u ≥ 0,

Σ_{k≥1} f_k(u) = K < ∞.
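For the Poisson case treated in Section 5(b), where f(u) = βe^{−βu}1_{R₊}(u), the density f_n is the Gamma(n, β) density and the constant above is K = β; both facts can be checked numerically (β and the truncation level are arbitrary choices):

```python
import math

BETA = 2.0

def f_n(n, u):
    # Gamma(n, beta) density of t_n - t_0 for exponential spacings
    return BETA * math.exp(-BETA * u) * (BETA * u) ** (n - 1) / math.factorial(n - 1)

def tail_sum(u, n_max=200):
    # sum_{n=1}^{n_max} f_n(u), using the stable term recursion
    # f_{n+1}(u) = f_n(u) * (beta u) / n  to avoid huge intermediates
    term = BETA * math.exp(-BETA * u)
    total = term
    for n in range(1, n_max):
        term *= BETA * u / n
        total += term
    return total

defect = max(abs(tail_sum(u) - BETA) for u in (0.1, 1.0, 5.0, 20.0))
```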

Let us now consider the process Y_n = X(t_n), n ≥ 0; for n ≥ 0, let

(5.1)  C_{kα}(n) = (1/(KT)) Σ_{l≥1} E{Y_{l+n}Y_l exp(−i(2π/T)kt_l) 1_{t_l<T}}.

For n ≥ 1, we have

E{Y_{l+n}Y_l exp(−i(2π/T)kt_l) 1_{t_l<T}} = E{E{X(t_{l+n})X(t_l) exp(−i(2π/T)kt_l) 1_{t_l<T} | (t_n)}} = E{B(t_l, t_{l+n} − t_l) exp(−i(2π/T)kt_l) 1_{t_l<T}} = ∫_{−∞}^{∞} ∫₀^T B(t, u) exp(−i(2π/T)kt) f_l(t) f_n(u) dt du,

since t_{l+n} − t_l, with density f_n, and t_l, with density f_l, are independent.

On the other hand,

Σ_{l≥1} |∫_{−∞}^{∞} ∫₀^T B(t, u) exp(−i(2π/T)kt) f_l(t) f_n(u) dt du| ≤ K ∫_{−∞}^{∞} ∫₀^T |B(t, u)| dt du < ∞,

so, for n ≥ 1, we have

C_{kα}(n) = (1/(KT)) Σ_{l≥1} E{Y_{l+n}Y_l exp(−i(2π/T)kt_l) 1_{t_l<T}} = ∫_{−∞}^{∞} ((1/T) ∫₀^T B(t, u) exp(−i(2π/T)kt) dt) f_n(u) du = ∫_{−∞}^{∞} B_k(u) f_n(u) du = ∫₀^{∞} B_k(u) f_n(u) du.

Since

B_k(u) = ∫_{−∞}^{∞} g_k(ω) exp(iωu) dω and f_n(u) = (1/(2π)) ∫_{−∞}^{∞} φ(ω)^n exp(−iωu) dω, we have, for n ≥ 1,

C_{kα}(n) = ∫_{−∞}^{∞} g_k(ω) φ(ω)^n dω.

On the other hand,

dω . On the other hand,

C

kα

(0) = 1 KT

X

l≥1

E



X(t

l

)

2

exp



− i 2π T kt

l



1

{tl<T }



= X

l≥1 T

R

0

B(t, 0) exp



− i 2π T kt

 f

l

(t) dt

= B

k

(0) =

R

−∞

g

k

(ω) dω . Finally, for n ≥ 0 and k ∈ Z, we have

(5.2) C

kα

(n) =

R

−∞

g

k

(ω)φ(ω)

n

dω .
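Identity (5.2) can be verified numerically in a stationary toy case (k = 0, our choice): for B₀(u) = e^{−|u|} one has g₀(ω) = 1/(π(1 + ω²)), and with exponential spacings (the Poisson case of Section 5(b)) φ(ω) = β/(β − iω), while C₀(n) = ∫₀^∞ B₀(u)f_n(u) du = (β/(β + 1))^n exactly, being the Laplace transform of a Gamma density. The ω-integral in (5.2) should reproduce this value. A sketch:

```python
import math

BETA = 1.0

def integrand_real(omega, n):
    # Re[g_0(w) phi(w)^n] with g_0(w) = 1/(pi(1+w^2)), phi(w) = beta/(beta - i w)
    g0 = 1.0 / (math.pi * (1.0 + omega * omega))
    phi = complex(BETA, 0.0) / complex(BETA, -omega)
    return (g0 * phi ** n).real

def C_spectral(n, lim=200.0, steps=200000):
    # trapezoid approximation of \int g_0(w) phi(w)^n dw over [-lim, lim]
    dw = 2.0 * lim / steps
    s = 0.5 * (integrand_real(-lim, n) + integrand_real(lim, n))
    s += sum(integrand_real(-lim + j * dw, n) for j in range(1, steps))
    return s * dw

def C_exact(n):
    # \int_0^inf e^{-u} f_n(u) du = (beta/(beta+1))^n
    return (BETA / (BETA + 1.0)) ** n

err = max(abs(C_spectral(n) - C_exact(n)) for n in (1, 2, 3))
```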

Proposition 5.1. Random sampling is alias-free if the characteristic function φ(ω) takes no value more than once on the real axis.

P r o o f. For k ∈ Z, g_k ∈ L¹ ∩ L², and it suffices to apply Theorem 1 of [8] and (5.2).

Proposition 5.2. If the characteristic function φ(s) (where s ∈ C, ℑ(s) ≥ 0) takes the same value at two different points of the open upper half-plane, then aliasing occurs with random sampling.

P r o o f. We will show that, for k ≥ 0, there exists A_k ∈ L¹ ∩ L², not identically zero, such that A_k(ω) = A_k((2π/T)k − ω) and ∫_{−∞}^{∞} A_k(ω) φ(ω)^n dω = 0 for n ≥ 0. As the Fourier transformation is a unitary transformation of L² onto itself, if a_k(u) = ∫_{−∞}^{∞} A_k(ω) exp(iωu) dω for k ≥ 0 and A is the class of Fourier transforms of L¹ functions, we have to show that there exists a function a_k ∈ A ∩ L², not identically zero, such that

(a)  a_k(−u) = exp(−i(2π/T)ku) a_k(u),
(b)  ∫₀^{∞} a_k(u) f_n(u) du = 0  (n ≥ 1),
(c)  a_k(0) = ∫_{−∞}^{∞} A_k(ω) dω = 0.

The existence of a_k may be proved as in the proof of Theorem 2 of [8], p. 239.

Assume that A_k is defined for k ≥ 0. For k < 0, let A_k(−ω) = A_{−k}(ω), and for j, k ∈ Z, A_{jk}(ω) = A_{k−j}(ω − (2π/T)j). We have

A_{jk}(ω) = A_{k−j}((2π/T)(k − j) − (ω − (2π/T)j)) = A_{j−k}(ω − (2π/T)k) = A_{kj}(ω).

These A_{jk}(ω) have the same properties as those used above to construct alias autocovariance functions.

(b) Poisson sampling case. We assume from now on that the α_n are random variables with common exponential probability density f(u) = β exp(−βu) 1_{R₊}(u); thus

f_n(u) = β exp(−βu) (βu)^{n−1}/(n − 1)! 1_{R₊}(u),  φ(ω) = β/(β − iω),

and Σ_{n≥1} f_n(u) = β < ∞. It is easily seen from Proposition 5.1 that Poisson sampling is alias-free. The functions {f_n(u) : n ≥ 1} form a complete system in L²[0, ∞[, therefore we have a complete orthogonal sequence {q_n(u) : n ≥ 1} in L²[0, ∞[ such that q_n(u) = b_{n,1}f₁(u) + ... + b_{n,n}f_n(u), where

b_{n,l} = (2/β)^{1/2} (−2)^{l−1} C_{n−1}^{l−1}
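With these b_{n,l}, the claimed orthogonality can be checked directly: the inner products ⟨f_l, f_m⟩ = ∫₀^∞ f_l f_m du have the closed form β(l + m − 2)!/((l − 1)!(m − 1)!2^{l+m−1}) (our computation, not the paper's), and the Gram matrix of the q_n should then be the identity. A sketch:

```python
import math

BETA = 0.7

def inner_ff(l, m):
    # <f_l, f_m> in L^2[0, inf): beta (l+m-2)! / ((l-1)!(m-1)! 2^{l+m-1})
    return (BETA * math.factorial(l + m - 2)
            / (math.factorial(l - 1) * math.factorial(m - 1) * 2 ** (l + m - 1)))

def b(n, l):
    # b_{n,l} = (2/beta)^{1/2} (-2)^{l-1} C(n-1, l-1)
    return math.sqrt(2.0 / BETA) * (-2.0) ** (l - 1) * math.comb(n - 1, l - 1)

def inner_qq(n, m):
    # <q_n, q_m> expanded via q_n = sum_l b_{n,l} f_l
    return sum(b(n, l) * b(m, k) * inner_ff(l, k)
               for l in range(1, n + 1) for k in range(1, m + 1))

gram_defect = max(abs(inner_qq(n, m) - (1.0 if n == m else 0.0))
                  for n in range(1, 6) for m in range(1, 6))
```

In fact q_n(u) = (2β)^{1/2} e^{−βu} L_{n−1}(2βu) with L_n the Laguerre polynomials, which explains the orthonormality.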

For n ≥ 1, let

(5.3)  γ_{kα}(n) = ∫₀^{∞} B_k(u) q_n(u) du = b_{n,1}C_{kα}(1) + ... + b_{n,n}C_{kα}(n);

then, in L²[0, ∞[,

(5.4)  B_k(u) = Σ_{n≥1} γ_{kα}(n) q_n(u)  (k ∈ Z).

Using (2.1), we have, in L²(R),

(5.5)  g_k(ω) = Σ_{n≥1} γ_{kα}(n) [ψ_n(ω) + ψ_n((2π/T)k − ω)]  (k ∈ Z),

where

ψ_n(ω) = (1/(2π)) ∫₀^{∞} exp(−iωu) q_n(u) du.

Further, we will use (5.4) and (5.5) to find estimators of g_k(ω) and B_k(u). First, we study kernel estimators of g_k(ω).

Let H(ω) be an even, absolutely integrable function with ∫_{−∞}^{∞} H(ω) dω = 1. Let M_N be a sequence of positive numbers such that lim_{N→∞} M_N = ∞. Consider W_N(ω) = M_N H(M_N ω), h(t) = ∫_{−∞}^{∞} H(ω) exp(iωt) dω and w_N(t) = h(t/M_N). Assuming that we get a sample X(t₁), ..., X(t_N), we estimate g_k(ω) by the following estimators:

g_k^{(1)}(N, ω) = (1/(2πβN)) Σ_{n=1}^{N−1} Σ_{l=1}^{N−n} w_N(t_{l+n} − t_l) X(t_{l+n})X(t_l) exp(−i(2π/T)kt_l) × [exp(i(ω − (2π/T)k)(t_{l+n} − t_l)) + exp(−iω(t_{l+n} − t_l))]

and

g_k^{(2)}(N, ω) = (1/(2πβN)) Σ_{n=1}^{M_N} Σ_{l=1}^{N−n} X(t_{l+n})X(t_l) exp(−i(2π/T)kt_l) × [exp(i(ω − (2π/T)k)(t_{l+n} − t_l)) + exp(−iω(t_{l+n} − t_l))]
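A direct transcription of g_k^{(1)}(N, ω) can be run on simulated Poisson-sampled data; the Gaussian pair H(ω) = (2π)^{−1/2}e^{−ω²/2}, h(t) = e^{−t²/2}, the toy signal X(t) = cos(2πt/T)Z with Z standard normal, and all constants below are our own choices. For k = 0 the bracket reduces to 2cos(ω(t_{l+n} − t_l)), so the estimate should come out real:

```python
import cmath
import math
import random

T, BETA, N_SAMPLES = 2.0, 1.5, 400
M_N = 8.0          # bandwidth parameter, an arbitrary choice

def w_N(t):
    # w_N(t) = h(t / M_N) with h(t) = exp(-t^2 / 2)
    return math.exp(-0.5 * (t / M_N) ** 2)

def g1(k, omega, times, x):
    # direct transcription of the double sum defining g_k^{(1)}(N, omega)
    theta = 2 * math.pi / T
    total = 0.0 + 0.0j
    n_pts = len(times)
    for n in range(1, n_pts):
        for l in range(n_pts - n):
            d = times[l + n] - times[l]
            total += (w_N(d) * x[l + n] * x[l]
                      * cmath.exp(-1j * theta * k * times[l])
                      * (cmath.exp(1j * (omega - theta * k) * d)
                         + cmath.exp(-1j * omega * d)))
    return total / (2 * math.pi * BETA * n_pts)

rng = random.Random(1)
times = [0.0]
for _ in range(N_SAMPLES - 1):
    times.append(times[-1] + rng.expovariate(BETA))   # Poisson sampling times
z = rng.gauss(0.0, 1.0)
x = [math.cos(2 * math.pi * t / T) * z for t in times]

est = g1(0, 1.0, times, x)
```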

Asymptotic properties of g_k^{(1)}(N, ω). We have

g_k^{(1)}(N, ω) = (1/(4πβN)) ∫_{−∞}^{∞} W_N(u) Σ_{n=1}^{N−1} Σ_{l=1}^{N−n} X(t_{l+n})X(t_l) exp(−i(2π/T)kt_l) × [exp(i(ω + u − (2π/T)k)(t_{l+n} − t_l)) + exp(−i(ω + u)(t_{l+n} − t_l))] du + (1/(4πβN)) ∫_{−∞}^{∞} W_N(u) Σ_{n=1}^{N−1} Σ_{l=1}^{N−n} X(t_{l+n})X(t_l) exp(−i(2π/T)kt_l) × [exp(i(ω − u − (2π/T)k)(t_{l+n} − t_l)) + exp(−i(ω − u)(t_{l+n} − t_l))] du = I₁ + I₂.

Setting λ = ω + u (resp. λ = ω − u) in I₁ (resp. I₂) and using the evenness of W_N, we notice that I₁ = I₂, therefore

g_k^{(1)}(N, ω) = (1/(2πβN)) ∫_{−∞}^{∞} W_N(ω − λ) Σ_{n=1}^{N−1} Σ_{l=1}^{N−n} X(t_{l+n})X(t_l) exp(−i(2π/T)kt_l) × [exp(i(λ − (2π/T)k)(t_{l+n} − t_l)) + exp(−iλ(t_{l+n} − t_l))] dλ.
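The geometric-sum bound (5.6) below can be checked numerically for the exponential characteristic function φ(ω) = β/(β − iω) of the Poisson case (β, the horizons N and the frequency grid are arbitrary choices):

```python
import math

BETA = 2.0

def phi(omega):
    # characteristic function of an exponential spacing: beta / (beta - i w)
    return complex(BETA, 0.0) / complex(BETA, -omega)

def partial_sum(omega, N):
    # sum_{n=1}^{N} phi(omega)^n, computed directly
    p = phi(omega)
    return sum(p ** n for n in range(1, N + 1))

# bound (5.6): |sum_{n=1}^{N} phi(w)^n| <= 2 beta / |w| for w != 0
violations = [
    (omega, N)
    for omega in (0.05, 0.3, 1.0, 4.0, 25.0)
    for N in (1, 5, 50, 500)
    if abs(partial_sum(omega, N)) > 2 * BETA / abs(omega) + 1e-9
]
```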

We notice that if ω ≠ 0 then

(5.6)  |Σ_{n=1}^{N} φ(ω)^n| = |φ(ω)| |φ(ω)^N − 1|/|φ(ω) − 1| ≤ 2|φ(ω)|/|φ(ω) − 1| ≤ 2β/|ω|.

Theorem 5.1. If X(t) is a PC process with

∫_{−∞}^{∞} (Σ_{j≠k} |B_j(u)|/|j − k| + |B_k(u)|) du < ∞,

then

E{g_k^{(1)}(N, ω)} = ∫_{−∞}^{∞} W_N(ω − λ) g_k(λ) dλ + o(1).

P r o o f. We have E{g_k^{(1)}(N, ω)} = ∫_{−∞}^{∞} W_N(ω − λ) I(λ) dλ, where

I(λ) = (1/(2πβN)) E{Σ_{n=1}^{N−1} Σ_{l=1}^{N−n} [exp(i(λ − (2π/T)k)(t_{l+n} − t_l)) + exp(−iλ(t_{l+n} − t_l))] X(t_{l+n})X(t_l) exp(−i(2π/T)kt_l)}

= (1/(2πβN)) Σ_{n=1}^{N−1} Σ_{l=1}^{N−n} ∫₀^{∞} ∫₀^{∞} Σ_{j∈Z} B_j(u) exp(i(2π/T)jt) exp(−i(2π/T)kt) × [exp(i(λ − (2π/T)k)u) + exp(−iλu)] f_n(u) f_l(t) du dt.

Since

∫_{−∞}^{∞} (Σ_{j≠k} |B_j(u)|/|j − k| + |B_k(u)|) du < ∞, we have (using Fubini's theorem)

I(λ) = (1/(2πβN)) Σ_{n=1}^{N−1} ∫₀^{∞} Σ_{j∈Z} B_j(u) [exp(i(λ − (2π/T)k)u) + exp(−iλu)] f_n(u) (Σ_{l=1}^{N−n} ∫₀^{∞} exp(i(2π/T)(j − k)t) f_l(t) dt) du = I₁(λ) + I₂(λ), where

I₁(λ) = (1/(2πβN)) Σ_{n=1}^{N−1} (N − n) ∫₀^{∞} B_k(u) [exp(i(λ − (2π/T)k)u) + exp(−iλu)] f_n(u) du and

I₂(λ) = (1/(2πβN)) Σ_{n=1}^{N−1} ∫₀^{∞} (Σ_{j≠k} B_j(u) [exp(i(λ − (2π/T)k)u) + exp(−iλu)] Σ_{l=1}^{N−n} φ((2π/T)(j − k))^l) f_n(u) du.

Using (5.6) and Lemma 1 of [6], we have

|I₂(λ)| ≤ (4/(2πβN)) Σ_{n=1}^{N−1} ∫₀^{∞} (Σ_{j≠k} |B_j(u)| βT/(2π|j − k|)) f_n(u) du = O(1/N),

where the O(1/N) term is uniform in λ. Put

e_{kn}(λ) = ∫₀^{∞} B_k(u) [exp(i(λ − (2π/T)k)u) + exp(−iλu)] f_n(u) du.

Then

(1/(2πβ)) Σ_{n≥1} e_{kn}(λ) = (1/(2π)) ∫₀^{∞} B_k(u) [exp(i(λ − (2π/T)k)u) + exp(−iλu)] du = g_k(λ), and therefore

(λ) and therefore

I

1

(λ) = 1 2πβ

N −1

X

n=1

 1 − n

N

 e

kn

(λ)

= g

k

(λ) − 1 2πβ

X

n=N

e

kn

(λ) − 1 2πβN

N −1

X

n=1

ne

kn

(λ) . Since |e

kn

(λ)| ≤ 2 R

0

|B

k

(u)|f

n

(u) du, the series P

n≥1

e

kn

(λ) converges uni- formly (Lemma 1 of [6]); so P

n≥N

e

kn

(λ) and

N1

P

N −1

n=1

ne

kn

(λ) (Kronecker’s

(16)

lemma) converge uniformly to 0. Finally, E{g

k(1)

(N, ω)} =

R

−∞

W

N

(ω − λ)[g

k

(λ) + o(1) + O(1/N )] dλ

=

R

−∞

W

N

(ω − λ)g

k

(λ) dλ + o(1) .

Corollary 5.1. If X(t) is a PC process with

∫_{−∞}^{∞} (Σ_{j≠k} |B_j(u)|/|j − k| + |B_k(u)|) du < ∞ and ∫₀^{∞} |uB_k(u)| du < ∞,

then

E{g_k^{(1)}(N, ω)} = ∫_{−∞}^{∞} W_N(ω − λ) g_k(λ) dλ + O(1/N).

P r o o f. We have

I₁(λ) = g_k(λ) − (1/(2πβ)) Σ_{n≥N} e_{kn}(λ) − (1/(2πβN)) Σ_{n=1}^{N−1} n e_{kn}(λ),

and using Lemma 1(ii) of [6], we have

|Σ_{n≥N} e_{kn}(λ)| ≤ (2/N) Σ_{n=N}^{∞} n ∫₀^{∞} |B_k(u)| f_n(u) du = O(1/N)

and

|(1/(2πβN)) Σ_{n=1}^{N−1} n e_{kn}(λ)| ≤ (1/N) Σ_{n=1}^{N−1} n ∫₀^{∞} |B_k(u)| f_n(u) du = O(1/N).

We notice that the O terms are independent of λ, which gives the result.

Corollary 5.2. If X(t) is a PC process with

∫_{−∞}^{∞} (Σ_{j≠k} |B_j(u)|/|j − k| + |B_k(u)|) du < ∞,

then g_k^{(1)} is an asymptotically unbiased estimator of g_k(ω).

P r o o f. By (2.1) we see that g_k(ω) is a continuous and bounded function. On the other hand,

E{g_k^{(1)}(N, ω)} = ∫_{−∞}^{∞} W_N(ω − λ) g_k(λ) dλ + o(1) = ∫_{−∞}^{∞} M_N H(M_N(ω − λ)) g_k(λ) dλ + o(1) = ∫_{−∞}^{∞} H(u) g_k(ω − u/M_N) du + o(1),

and we use Lebesgue's dominated convergence theorem to conclude.

Corollary 5.3. In addition to the hypotheses of Theorem 5.1, let q be a positive integer such that u^q B_k(u) ∈ L¹[0, ∞[, and assume h(t) is q times differentiable with bounded derivatives. Then

E{g_k^{(1)}(N, ω)} = g_k(ω) + Σ_{l=1}^{q−1} (i^l h^{(l)}(0)/M_N^l) g_k^{(l)}(ω) + O(1/M_N^q) + O(1/N),

where f^{(l)} denotes the derivative of f of order l.

The proof is identical to the proof of Corollary 1.2 of [5], p. 175.

To prove the consistency of the estimator g_k^{(1)}(N, λ), we make the following assumptions:

(H₁) (i) ∫_{−∞}^{∞} (Σ_{j≠k} |B_j(u)|/|j − k| + |B_k(u)|) du < ∞ and ∫₀^{∞} |uB_k(u)| du < ∞;

(ii) Σ_{j∈Z} |B_j(0)| ∫₀^{∞} |B_j(u)| du < ∞ and Σ_{j∈Z} |B_j(0)|² < ∞.

(H₂) For j ≥ 0, there exists h_j(u) such that

(i) h_j(u) is a continuous, even, nonnegative, integrable and nonincreasing function on [0, ∞[;

(ii) |B_j(u)| ≤ h_j(u);

(iii) ∫₀^{∞} ∫₀^{∞} (Σ_{j₁+j₂≠0} h_{j₁}(u₁)h_{j₂}(u₂)/|j₁ + j₂| + Σ_{j₁+j₂=0} h_{j₁}(u₁)h_{j₂}(u₂)) du₁ du₂ < ∞;

(iv) ∫₀^{∞} ∫₀^{∞} (Σ_{j₁+j₂≠0} h_{j₁}(u₁)h_{j₂}(u₂)/|j₁ + j₂| + Σ_{j₁+j₂=0} h_{j₁}(u₁)h_{j₂}(u₂)) (u₁ + u₂) du₁ du₂ < ∞.

(H₃) X(t) is PC up to order 4 and, for j ∈ Z, there exist nonnegative functions q_{j1}, q_{j2}, q_{j3} such that

|K_j(u₁, u₂, u₃)| ≤ Π_{i=1}^{3} q_{ji}(u_i),

∫₀^{∞} ∫₀^{∞} ∫₀^{∞} (Σ_{j≠k} q_{j1}(u₁)q_{j2}(u₂)q_{j3}(u₃)/|j − k| + q_{k1}(u₁)q_{k2}(u₂)q_{k3}(u₃)) du₁ du₂ du₃ < ∞.

We also assume that the sequence M_N satisfies lim_{N→∞} M_N/N = 0.

Let

p(l, n, k, λ) = [exp(i(λ − (2π/T)k)(t_{l+n} − t_l)) + exp(−iλ(t_{l+n} − t_l))] exp(−i(2π/T)kt_l).

Then

(5.7)  var{g_k^{(1)}(N, ω)} = ∫_{−∞}^{∞} ∫_{−∞}^{∞} W_N(ω − λ) W_N(ω − µ) cov{J_k(N, λ), J_k(N, µ)} dλ dµ,

where

J_k(N, λ) = (1/(2πβN)) Σ_{n=1}^{N−1} Σ_{l=1}^{N−n} X(t_{l+n})X(t_l) p(l, n, k, λ).

We have

E{J_k(N, λ) J̄_k(N, µ)} = (1/(2πβN)²) Σ_{n₁,n₂=1}^{N−1} Σ_{l₁=1}^{N−n₁} Σ_{l₂=1}^{N−n₂} E(p(l₁, n₁, k, λ) p̄(l₂, n₂, k, µ) E{X(t_{l₁+n₁})X(t_{l₁})X(t_{l₂+n₂})X(t_{l₂}) | (t_n)}),

so

cov{J_k(N, λ), J_k(N, µ)} = Σ_{i=1}^{4} U_{N,i}(k, λ, µ) − E{J_k(N, λ)} Ē{J_k(N, µ)}

where

(5.8a)  U_{N,1}(k, λ, µ) = (1/(2πβN)²) Σ_{n₁,n₂=1}^{N−1} Σ_{l₁=1}^{N−n₁} Σ_{l₂=1}^{N−n₂} E(p(l₁, n₁, k, λ) p̄(l₂, n₂, k, µ) B(t_{l₁}, t_{l₁+n₁} − t_{l₁}) B(t_{l₂}, t_{l₂+n₂} − t_{l₂})),

(5.8b)  U_{N,2}(k, λ, µ) = (1/(2πβN)²) Σ_{n₁,n₂=1}^{N−1} Σ_{l₁=1}^{N−n₁} Σ_{l₂=1}^{N−n₂} E(p(l₁, n₁, k, λ) p̄(l₂, n₂, k, µ) B(t_{l₁}, t_{l₂} − t_{l₁}) B(t_{l₁+n₁}, t_{l₂+n₂} − t_{l₁+n₁})),
