
(1)

Mathematical Statistics

Anna Janicka

Lecture VII, 6.04.2020

ESTIMATOR PROPERTIES, PART III

(2)

Plan for Today

1. Asymptotic properties of estimators – cont.:

• consistency

• asymptotic normality

• asymptotic efficiency

2. Consistency, asymptotic normality and asymptotic efficiency of MLE estimators

(3)

Consistency – reminder

Let $X_1, X_2, \dots, X_n, \dots$ be an IID sample (of independent random variables from the same distribution). Let $\hat g(X_1, X_2, \dots, X_n)$ be a sequence of estimators of the value $g(\theta)$.

$\hat g$ is a consistent estimator if for all $\theta \in \Theta$ and any $\varepsilon > 0$:

$$\lim_{n\to\infty} P_\theta\big(|\hat g(X_1, X_2, \dots, X_n) - g(\theta)| < \varepsilon\big) = 1$$

(i.e. $\hat g$ converges to $g(\theta)$ in probability).
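A minimal numerical sketch of this definition (an illustration added here, assuming numpy is available; the Exp(1) sample, the tolerance eps = 0.1, and the seed are arbitrary choices): the Monte Carlo estimate of $P_\theta(|\bar X_n - \mu| \geq \varepsilon)$ should shrink toward 0 as $n$ grows.

import numpy as np

rng = np.random.default_rng(0)
mu, eps, reps = 1.0, 0.1, 2000          # true mean, tolerance, Monte Carlo repetitions

for n in (10, 100, 1000, 10000):
    # draw `reps` independent samples of size n and take the mean of each
    means = rng.exponential(scale=mu, size=(reps, n)).mean(axis=1)
    # Monte Carlo estimate of P(|mean - mu| >= eps); it decays toward 0
    print(n, np.mean(np.abs(means - mu) >= eps))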

(4)

Strong consistency – reminder

Let $X_1, X_2, \dots, X_n, \dots$ be an IID sample (of independent random variables from the same distribution). Let $\hat g(X_1, X_2, \dots, X_n)$ be a sequence of estimators of the value $g(\theta)$.

$\hat g$ is strongly consistent if for any $\theta$:

$$P_\theta\Big(\lim_{n\to\infty} \hat g(X_1, X_2, \dots, X_n) = g(\theta)\Big) = 1$$

(i.e. $\hat g$ converges to $g(\theta)$ almost surely).

(5)

Consistency – how to verify?

• From the definition: for example, with the use of a version of the Chebyshev inequality:

$$P_\theta\big(|\hat g(X) - g(\theta)| \geq \varepsilon\big) \leq \frac{\mathbb{E}_\theta(\hat g(X) - g(\theta))^2}{\varepsilon^2}.$$

Given that the MSE of an estimator is

$$\mathrm{MSE}(\theta, \hat g) = \mathbb{E}_\theta(\hat g(X) - g(\theta))^2,$$

we get a sufficient condition for consistency:

$$\lim_{n\to\infty} \mathrm{MSE}(\theta, \hat g) = 0.$$

• From the LLN.
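As a one-line application of this sufficient condition (a standard example, not spelled out on the slide): for the sample mean of IID variables with finite variance $\sigma^2(\theta)$,

$$\mathrm{MSE}(\theta, \bar X) = \mathbb{E}_\theta(\bar X - \mu(\theta))^2 = \frac{\sigma^2(\theta)}{n} \xrightarrow[n\to\infty]{} 0,$$

so $\bar X$ is a consistent estimator of $\mu(\theta)$.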

(6)

Consistency – examples

• For any family of distributions with an expected value: the sample mean $\bar X$ is a consistent estimator of the expected value $\mu(\theta) = \mathbb{E}_\theta(X_1)$. Convergence follows from the SLLN.

• For distributions having a variance:

$$S_n^2 = \frac{1}{n}\sum_{i=1}^n (X_i - \bar X)^2 \quad\text{and}\quad \hat S_n^2 = \frac{1}{n-1}\sum_{i=1}^n (X_i - \bar X)^2$$

are consistent estimators of the variance $\sigma^2(\theta) = \mathrm{Var}_\theta(X_1)$. Convergence follows from the SLLN.
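The SLLN step can be made explicit (a standard one-line argument, not spelled out on the slide): writing

$$S_n^2 = \frac{1}{n}\sum_{i=1}^n X_i^2 - \bar X^2 \xrightarrow{\text{a.s.}} \mathbb{E}_\theta X_1^2 - (\mathbb{E}_\theta X_1)^2 = \sigma^2(\theta),$$

and since $\hat S_n^2 = \frac{n}{n-1} S_n^2$ with $\frac{n}{n-1} \to 1$, both estimators converge almost surely to the same limit.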

(7)

Consistency – examples/properties

• An estimator may be unbiased but inconsistent; e.g. $T_n(X_1, X_2, \dots, X_n) = X_1$ as an estimator of $\mu(\theta) = \mathbb{E}_\theta(X_1)$.

• An estimator may be biased but consistent; e.g. the biased estimator of the variance, or any unbiased consistent estimator $+\ 1/n$.

(8)

Asymptotic normality

$\hat g(X_1, X_2, \dots, X_n)$ is an asymptotically normal estimator of $g(\theta)$ if for any $\theta$ there exists $\sigma^2(\theta)$ such that, when $n\to\infty$,

$$\sqrt n\big(\hat g(X_1, X_2, \dots, X_n) - g(\theta)\big) \xrightarrow{D} N(0, \sigma^2(\theta)).$$

This is convergence in distribution, i.e. for any $a$:

$$\lim_{n\to\infty} P_\theta\left(\frac{\sqrt n\big(\hat g(X_1, X_2, \dots, X_n) - g(\theta)\big)}{\sigma(\theta)} \leq a\right) = \Phi(a).$$

In other words, the distribution of $\hat g(X_1, X_2, \dots, X_n)$ is for large $n$ similar to $N\big(g(\theta), \tfrac{\sigma^2(\theta)}{n}\big)$.
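A quick simulation sketch of this convergence (an added illustration, assuming numpy; the Exp(λ) model with λ = 2 and the seed are arbitrary choices): for large $n$ the statistic $\sqrt n(\bar X - \mu)$ should have variance close to $\sigma^2 = 1/\lambda^2$.

import numpy as np

rng = np.random.default_rng(1)
lam, n, reps = 2.0, 500, 5000           # Exp(lam): mean 1/lam, variance 1/lam**2

means = rng.exponential(scale=1/lam, size=(reps, n)).mean(axis=1)
z = np.sqrt(n) * (means - 1/lam)        # approximately N(0, 1/lam**2) for large n

print("empirical variance:", z.var())   # should be close to 1/lam**2 = 0.25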

(9)

Asymptotic normality – properties

• An asymptotically normal estimator is consistent (not necessarily strongly).

• A condition similar to unbiasedness: the expected value of the asymptotic distribution equals $g(\theta)$ (but the estimator itself need not be unbiased).

• Asymptotic variance: defined as $\sigma^2(\theta)/n$ or as $\sigma^2(\theta)$ – the variance of the asymptotic distribution.

(10)

Asymptotic normality – what it is not

• For an asymptotically normal estimator we usually have

$$\mathbb{E}_\theta\, \hat g(X_1, X_2, \dots, X_n) \xrightarrow[n\to\infty]{} g(\theta) \quad\text{and}\quad n \cdot \mathrm{Var}_\theta\, \hat g(X_1, X_2, \dots, X_n) \xrightarrow[n\to\infty]{} \sigma^2(\theta),$$

but these properties need not hold, because convergence in distribution does not imply convergence of moments.

(11)

Asymptotic normality – example

Let $X_1, X_2, \dots, X_n, \dots$ be an IID sample from a distribution with mean $\mu$ and variance $\sigma^2$. On the basis of the CLT, for the sample mean we have

$$\sqrt n(\bar X - \mu) \xrightarrow{D} N(0, \sigma^2).$$

In this case the asymptotic variance, $\sigma^2/n$, is equal to the estimator variance.

(12)

Asymptotic normality – how to prove it

In many cases, the following is useful:

Delta Method. Let $T_n$ be a sequence of random variables such that for $n\to\infty$ we have

$$\sqrt n(T_n - \mu) \xrightarrow{D} N(0, \sigma^2),$$

and let $h\colon \mathbb{R}\to\mathbb{R}$ be a function differentiable at the point $\mu$ such that $h'(\mu) \neq 0$. Then

$$\sqrt n\big(h(T_n) - h(\mu)\big) \xrightarrow{D} N\big(0, \sigma^2 (h'(\mu))^2\big).$$

Here $\mu$ and $\sigma^2$ are functions of $\theta$.

The method is usually used when estimators are functions of statistics $T_n$ which can easily be shown to converge on the basis of the CLT.

(13)

Asymptotic normality – examples cont.

In the exponential model: $\mathrm{MLE}(\lambda) = \frac{1}{\bar X}$.

From the CLT, we get

$$\sqrt n\Big(\bar X - \frac{1}{\lambda}\Big) \xrightarrow{D} N\Big(0, \frac{1}{\lambda^2}\Big),$$

so from the Delta Method for $h(t) = 1/t$ (with $h'(1/\lambda) = -\lambda^2$):

$$\sqrt n\Big(\frac{1}{\bar X} - \lambda\Big) \xrightarrow{D} N\Big(0, \frac{1}{\lambda^2}\big(\lambda^2\big)^2\Big) = N(0, \lambda^2),$$

so $\frac{1}{\bar X}$ is an asymptotically normal (and consistent) estimator of $\lambda$.
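The limit above can be checked numerically (an added sketch, assuming numpy; λ = 2, the sample size, and the seed are arbitrary choices): the empirical variance of $\sqrt n\big(\frac{1}{\bar X} - \lambda\big)$ should be close to $\lambda^2$.

import numpy as np

rng = np.random.default_rng(2)
lam, n, reps = 2.0, 1000, 5000

means = rng.exponential(scale=1/lam, size=(reps, n)).mean(axis=1)
z = np.sqrt(n) * (1/means - lam)        # MLE of lambda is 1 / (sample mean)

print("empirical variance:", z.var())   # should be close to lam**2 = 4.0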

(14)

Asymptotic efficiency

For an asymptotically normal estimator $\hat g(X_1, X_2, \dots, X_n)$ of $g(\theta)$ we define asymptotic efficiency as

$$\mathrm{as.ef}(\hat g) = \frac{(g'(\theta))^2}{\frac{\sigma^2(\theta)}{n}\, I_n(\theta)} = \frac{(g'(\theta))^2}{\sigma^2(\theta)\, I_1(\theta)},$$

where $\sigma^2(\theta)/n$ is the asymptotic variance, i.e.

$$\sqrt n\big(\hat g(X_1, X_2, \dots, X_n) - g(\theta)\big) \xrightarrow{D} N(0, \sigma^2(\theta)) \quad\text{for } n\to\infty.$$

This is a modification of the definition of efficiency to the limit case, with the asymptotic variance in place of the ordinary variance.
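For instance (a standard check, not worked on the slide): in the normal model $N(\mu, \sigma^2)$ with known $\sigma^2$ and $g(\mu) = \mu$, the sample mean satisfies $\sqrt n(\bar X - \mu) \xrightarrow{D} N(0, \sigma^2)$ and $I_1(\mu) = 1/\sigma^2$, so

$$\mathrm{as.ef}(\bar X) = \frac{(g'(\mu))^2}{\sigma^2\, I_1(\mu)} = \frac{1}{\sigma^2 \cdot \frac{1}{\sigma^2}} = 1,$$

i.e. the sample mean is asymptotically efficient.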

(15)

Relative asymptotic efficiency

Relative asymptotic efficiency for asymptotically normal estimators $\hat g_1(X)$ and $\hat g_2(X)$:

$$\mathrm{as.ef}(\hat g_1, \hat g_2) = \frac{\sigma_2^2(\theta)}{\sigma_1^2(\theta)} = \frac{\mathrm{as.ef}(\hat g_1)}{\mathrm{as.ef}(\hat g_2)}.$$

Note. A less (asymptotically) efficient estimator may have other properties that make it preferable to a more efficient one.

(16)

Relative asymptotic efficiency – examples.

Is the mean better than the median?

It depends on the distribution!

a) normal model $N(\mu, \sigma^2)$:

$$\sqrt n(\bar X - \mu) \xrightarrow{D} N(0, \sigma^2), \qquad \sqrt n(\widehat{\mathrm{med}} - \mu) \xrightarrow{D} N\Big(0, \tfrac{\pi}{2}\sigma^2\Big), \qquad \mathrm{as.ef}(\widehat{\mathrm{med}}, \bar X) = \tfrac{2}{\pi} < 1;$$

b) Laplace model $\mathrm{Lapl}(\theta, \lambda)$:

$$\sqrt n(\bar X - \theta) \xrightarrow{D} N(0, 2\lambda^2), \qquad \sqrt n(\widehat{\mathrm{med}} - \theta) \xrightarrow{D} N(0, \lambda^2), \qquad \mathrm{as.ef}(\widehat{\mathrm{med}}, \bar X) = 2 > 1;$$

c) some distributions do not have a mean...

Theorem: For a sample from a continuous distribution with density $f(x)$, the sample median is an asymptotically normal estimator of the median $m$ (provided the density is continuous and nonzero at the point $m$):

$$\sqrt n(\widehat{\mathrm{med}} - m) \xrightarrow{D} N\Big(0, \frac{1}{4 (f(m))^2}\Big).$$
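Both cases can be reproduced with a short simulation (an added sketch, assuming numpy; standard normal and standard Laplace samples, i.e. $\sigma = 1$ and $\lambda = 1$, and the seed are arbitrary choices). The printed pairs estimate $n \cdot \mathrm{Var}$ of the mean and of the median, which should approach $(\sigma^2, \tfrac{\pi}{2}\sigma^2) \approx (1, 1.57)$ for the normal model and $(2\lambda^2, \lambda^2) = (2, 1)$ for the Laplace model.

import numpy as np

rng = np.random.default_rng(3)
n, reps = 1001, 4000          # odd n, so the sample median is a single order statistic

x = rng.normal(size=(reps, n))          # normal model: the mean wins
print("normal :", n * x.mean(axis=1).var(), n * np.median(x, axis=1).var())

y = rng.laplace(size=(reps, n))         # Laplace model: the median wins
print("laplace:", n * y.mean(axis=1).var(), n * np.median(y, axis=1).var())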

(17)

Consistency of ML estimators

Let $X_1, X_2, \dots, X_n, \dots$ be a sample from a distribution with density $f_\theta(x)$. If $\Theta \subseteq \mathbb{R}$ is an open set, and:

• all densities $f_\theta$ have the same support;

• the equation $\frac{d}{d\theta} \ln L(\theta) = 0$ has exactly one solution $\hat\theta$,

then $\hat\theta$ is the MLE($\theta$) and it is consistent.

Note. MLE estimators do not have to be unbiased!
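For example (a standard instance of this theorem): in the exponential model with density $f_\lambda(x) = \lambda e^{-\lambda x}$ for $x > 0$, all densities share the support $(0, \infty)$, and

$$\frac{d}{d\lambda} \ln L(\lambda) = \frac{d}{d\lambda}\Big(n \ln \lambda - \lambda \sum_{i=1}^n X_i\Big) = \frac{n}{\lambda} - \sum_{i=1}^n X_i = 0$$

has the unique solution $\hat\lambda = \frac{1}{\bar X}$, so $\hat\lambda = \mathrm{MLE}(\lambda)$ is consistent.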

(18)

Asymptotic normality of ML estimators

Let $X_1, X_2, \dots, X_n, \dots$ be a sample with density $f_\theta(x)$, such that $\Theta \subseteq \mathbb{R}$ is open and $\hat\theta$ is a consistent MLE (for example, it fulfills the assumptions of the previous theorem). If additionally:

• $\frac{d^2}{d\theta^2} \ln L(\theta)$ exists;

• the Fisher information can be calculated, with $0 < I_1(\theta) < \infty$;

• the order of integration with respect to $x$ and differentiation with respect to $\theta$ may be exchanged;

then $\hat\theta$ is asymptotically normal and

$$\sqrt n(\hat\theta - \theta) \xrightarrow{D} N\Big(0, \frac{1}{I_1(\theta)}\Big).$$
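A standard illustration (not worked on the slide): in the Poisson model, $\hat\lambda = \mathrm{MLE}(\lambda) = \bar X$ and $I_1(\lambda) = 1/\lambda$, so the theorem gives

$$\sqrt n(\bar X - \lambda) \xrightarrow{D} N(0, \lambda),$$

which agrees with the CLT directly, since $\mathrm{Var}_\lambda(X_1) = \lambda$.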

(19)

Asymptotic normality of ML estimators

Additionally, if $g\colon \mathbb{R}\to\mathbb{R}$ is a function differentiable at the point $\theta$ such that $g'(\theta) \neq 0$, and $\hat g(X_1, X_2, \dots, X_n)$ is MLE($g(\theta)$), then

$$\sqrt n\big(\hat g(X_1, X_2, \dots, X_n) - g(\theta)\big) \xrightarrow{D} N\Big(0, \frac{(g'(\theta))^2}{I_1(\theta)}\Big).$$
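As an added consistency check, this recovers the slide-13 limit read in the other direction: in the exponential model take $g(\lambda) = 1/\lambda = \mathbb{E}_\lambda X_1$, so by invariance $\hat g = g(\hat\lambda) = \bar X$, with $g'(\lambda) = -1/\lambda^2$ and $I_1(\lambda) = 1/\lambda^2$, hence

$$\sqrt n\Big(\bar X - \frac{1}{\lambda}\Big) \xrightarrow{D} N\Big(0, \frac{(1/\lambda^2)^2}{1/\lambda^2}\Big) = N\Big(0, \frac{1}{\lambda^2}\Big),$$

in agreement with the CLT.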

(20)

Asymptotic efficiency of ML estimators

If the assumptions of the previous theorems are fulfilled, then the ML estimator (of  or g( )) is asymptotically efficient.

(21)

Asymptotic normality and efficiency of ML estimators – examples

• In the normal model: the sample mean is an asymptotically efficient estimator of $\mu$.

• In the Laplace model: the sample median is an asymptotically efficient estimator of $\theta$.

(22)

Summary: basic (point) estimator properties

• bias

• variance

• MSE

• efficiency

• consistency

• asymptotic normality

• asymptotic efficiency
