(1)

Mathematical Statistics Anna Janicka

Lecture V, 18.03.2019

PROPERTIES OF ESTIMATORS, PART I

(2)

Plan for today

1. Maximum likelihood estimation examples – cont.

2. Basic estimator properties:

estimator bias

unbiased estimators

3. Measures of quality: comparing estimators

mean square error

incomparable estimators

minimum-variance unbiased estimator

(3)

MLE – Example 1.

Quality control, cont. We maximize

$$L(\theta) = \binom{n}{x}\,\theta^x\,(1-\theta)^{n-x},$$

or equivalently maximize

$$l(\theta) = \ln L(\theta) = \ln\binom{n}{x} + x\ln\theta + (n-x)\ln(1-\theta),$$

i.e. solve

$$l'(\theta) = \frac{x}{\theta} - \frac{n-x}{1-\theta} = 0.$$

Solution:

$$\hat{\theta}_{ML} = \frac{x}{n}.$$
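The closed-form result can be checked numerically. A minimal sketch, assuming hypothetical data (x = 3 defective items among n = 20 inspected): maximize the log-likelihood over a fine grid of θ values and compare the maximizer with x/n.

```python
import numpy as np

# Hypothetical quality-control data: x = 3 defective items among n = 20 inspected.
n, x = 20, 3

# Evaluate l(theta) = x ln(theta) + (n - x) ln(1 - theta) on a fine grid;
# the constant ln C(n, x) does not affect the maximizer, so it is dropped.
thetas = np.linspace(1e-4, 1 - 1e-4, 100_000)
log_lik = x * np.log(thetas) + (n - x) * np.log(1 - thetas)

theta_hat = thetas[np.argmax(log_lik)]  # numerical maximizer, close to x/n = 0.15
```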

(4)

MLE – Example 3.

Normal model: X_1, X_2, ..., X_n are a sample from N(µ, σ²); µ, σ unknown.

$$l(\mu,\sigma) = \ln\prod_{i=1}^{n}\frac{1}{\sqrt{2\pi}\,\sigma}\exp\left(-\frac{(x_i-\mu)^2}{2\sigma^2}\right) = -\frac{n}{2}\ln(2\pi) - n\ln\sigma - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i-\mu)^2.$$

We solve

$$\frac{\partial l}{\partial \mu} = \frac{1}{\sigma^2}\sum_{i=1}^{n}(x_i-\mu) = 0, \qquad \frac{\partial l}{\partial \sigma} = -\frac{n}{\sigma} + \frac{1}{\sigma^3}\sum_{i=1}^{n}(x_i-\mu)^2 = 0,$$

and we get:

$$\hat{\mu}_{ML} = \bar{X}, \qquad \hat{\sigma}^2_{ML} = \frac{1}{n}\sum_{i=1}^{n}(X_i-\bar{X})^2.$$
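These closed forms can be verified numerically. A minimal sketch on simulated data (the parameter values and sample size below are hypothetical): compute µ̂ and σ̂² from their formulas and check that perturbing either one lowers the Gaussian log-likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)
sample = rng.normal(loc=2.0, scale=3.0, size=10_000)  # hypothetical N(2, 9) sample
n = sample.size

mu_hat = sample.mean()                    # MLE of mu: the sample mean
s2_hat = ((sample - mu_hat) ** 2).mean()  # MLE of sigma^2: (1/n) sum (x_i - xbar)^2

def log_lik(mu, s2):
    # Gaussian log-likelihood of the whole sample at (mu, s2)
    return -0.5 * n * np.log(2 * np.pi * s2) - ((sample - mu) ** 2).sum() / (2 * s2)

# The closed-form MLE should beat nearby parameter values.
best = log_lik(mu_hat, s2_hat)
```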

(5)

Estimator properties

Aren’t the errors too large? Do we estimate what we want?

The estimator θ̂(X) is supposed to approximate θ; in general, ĝ(X) is to approximate g(θ).

What do we want? Small error. But:

the errors are random variables (the data are random variables) → we can only control the expected value of the error;

the error depends on the unknown θ → we can’t do anything about that...

(6)

Estimator bias

If θ̂(X) is an estimator of θ: the bias of the estimator is equal to

$$b(\theta) = E_\theta\left(\hat{\theta}(X) - \theta\right) = E_\theta\,\hat{\theta}(X) - \theta.$$

If ĝ(X) is an estimator of g(θ): the bias of the estimator is equal to

$$b(\theta) = E_\theta\left(\hat{g}(X) - g(\theta)\right) = E_\theta\,\hat{g}(X) - g(\theta).$$

θ̂ / ĝ is unbiased if b(θ) = 0 for all θ ∈ Θ.

Other notations are also used, e.g. $B_\theta(\hat{g})$.

(7)

The normal model: reminder

Normal model: X_1, X_2, ..., X_n are a sample from the distribution N(µ, σ²); µ, σ unknown.

Theorem. In the normal model, X̄ and S² are independent random variables such that

$$\bar{X} \sim N\!\left(\mu, \frac{\sigma^2}{n}\right), \qquad \frac{(n-1)S^2}{\sigma^2} \sim \chi^2_{n-1}.$$

In particular:

$$E_{\mu,\sigma}\bar{X} = \mu, \quad Var_{\mu,\sigma}\bar{X} = \frac{\sigma^2}{n}, \quad E_{\mu,\sigma}S^2 = \sigma^2, \quad Var_{\mu,\sigma}S^2 = \frac{2\sigma^4}{n-1}.$$

(8)

Estimator bias – Example 1

In a normal model (in fact, in any model with unknown mean µ):

µ̂ = X̄ is an unbiased estimator of µ:

$$E_{\mu,\sigma}\bar{X} = E_{\mu,\sigma}\frac{1}{n}\sum_{i=1}^{n}X_i = \frac{1}{n}\cdot n\mu = \mu.$$

µ̂₁ = X₁ is an unbiased estimator of µ:

$$E_{\mu,\sigma}\hat{\mu}_1 = E_{\mu,\sigma}X_1 = \mu.$$

µ̂₂ = 5 is biased (e.g. for µ = 2):

$$E_{\mu,\sigma}\hat{\mu}_2 = 5 \ne \mu; \qquad \text{bias: } b(\mu) = 5 - \mu.$$

(9)

Estimator bias – Example 1 cont.

$\hat{S}^2 = \frac{1}{n}\sum_{i=1}^{n}(X_i-\bar{X})^2$ is a biased estimator of σ²:

$$E_{\mu,\sigma}\hat{S}^2 = E_{\mu,\sigma}\left(\frac{1}{n}\sum_{i=1}^{n}X_i^2 - \bar{X}^2\right) = (\sigma^2+\mu^2) - \left(\frac{\sigma^2}{n}+\mu^2\right) = \frac{n-1}{n}\,\sigma^2 \ne \sigma^2.$$

$S^2 = \frac{1}{n-1}\sum_{i=1}^{n}(X_i-\bar{X})^2$ is an unbiased estimator of σ²:

$$E_{\mu,\sigma}S^2 = \frac{n}{n-1}\,E_{\mu,\sigma}\hat{S}^2 = \frac{n}{n-1}\cdot\frac{n-1}{n}\,\sigma^2 = \sigma^2.$$

not necessarily the normal model!
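A Monte Carlo check of these two expectations, deliberately using a non-normal distribution since only a finite variance is required (the sample size, replication count, and distribution below are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 5, 200_000
sigma2 = 4.0
b = np.sqrt(12 * sigma2)  # Uniform(0, b) has variance b^2 / 12 = sigma2

samples = rng.uniform(0, b, size=(reps, n))
xbar = samples.mean(axis=1, keepdims=True)
ss = ((samples - xbar) ** 2).sum(axis=1)  # sum of squared deviations per replication

mean_biased = (ss / n).mean()          # close to (n-1)/n * sigma2 = 3.2
mean_unbiased = (ss / (n - 1)).mean()  # close to sigma2 = 4.0
```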

(10)

Estimator bias – Example 1 cont. (2)

The bias of the estimator $\hat{S}^2 = \frac{1}{n}\sum_{i=1}^{n}(X_i-\bar{X})^2$ is equal to

$$b(\sigma) = E_{\mu,\sigma}\hat{S}^2 - \sigma^2 = -\frac{\sigma^2}{n}$$

for any distribution with a variance. For n → ∞ the bias tends to 0, so this estimator is also OK for large samples.

(11)

Asymptotic unbiased estimator

An estimator ĝ(X) of g(θ) is asymptotically unbiased if

$$\forall_{\theta\in\Theta}: \lim_{n\to\infty} b_n(\theta) = 0.$$

(12)

How to compare estimators?

We want to minimize the error of the estimator; the estimator which makes smaller mistakes is better. The error may be either positive or negative, so usually we look at the square of the error (the mean squared difference between the estimator and the estimated value).

(13)

Mean Square Error

If θ̂(X) is an estimator of θ: the Mean Square Error of the estimator is the function

$$MSE(\theta, \hat{\theta}) = E_\theta\left(\hat{\theta}(X) - \theta\right)^2.$$

If ĝ(X) is an estimator of g(θ): the MSE of the estimator is the function

$$MSE(\theta, \hat{g}) = E_\theta\left(\hat{g}(X) - g(\theta)\right)^2.$$

We will only consider the MSE; other measures are also possible (e.g. with the absolute value).

(14)

Properties of the MSE

We have:

$$MSE(\theta, \hat{g}) = Var_\theta\,\hat{g}(X) + b(\theta)^2.$$

For unbiased estimators, the MSE is equal to the variance of the estimator.

(15)

MSE – Example 1

X_1, X_2, ..., X_n are a sample from a distribution with mean µ and variance σ²; µ, σ unknown.

MSE of µ̂ = X̄ (unbiased):

$$MSE(\mu,\sigma,\bar{X}) = E_{\mu,\sigma}(\bar{X}-\mu)^2 = Var_{\mu,\sigma}\bar{X} = \frac{\sigma^2}{n}.$$

MSE of µ̂₁ = X₁ (unbiased):

$$MSE(\mu,\sigma,\hat{\mu}_1) = E_{\mu,\sigma}(X_1-\mu)^2 = Var_{\mu,\sigma}X_1 = \sigma^2.$$

MSE of µ̂₂ = 5 (biased):

$$MSE(\mu,\sigma,\hat{\mu}_2) = E_{\mu,\sigma}(5-\mu)^2 = (5-\mu)^2.$$
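A simulation comparing the MSEs of the three mean estimators (the true µ, σ, the sample size, and the replication count below are hypothetical choices):

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, n, reps = 3.0, 2.0, 10, 100_000
samples = rng.normal(mu, sigma, size=(reps, n))

mse_xbar = ((samples.mean(axis=1) - mu) ** 2).mean()  # theory: sigma^2 / n = 0.4
mse_x1 = ((samples[:, 0] - mu) ** 2).mean()           # theory: sigma^2 = 4.0
mse_const = (5.0 - mu) ** 2                           # exact: (5 - mu)^2 = 4.0
```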

(16)

MSE – Example 2 Normal model

MSE of $S^2 = \frac{1}{n-1}\sum_{i=1}^{n}(X_i-\bar{X})^2$:

$$MSE(\mu,\sigma,S^2) = E_{\mu,\sigma}(S^2-\sigma^2)^2 = Var_{\mu,\sigma}S^2 = \frac{2\sigma^4}{n-1}.$$

MSE of $\hat{S}^2 = \frac{1}{n}\sum_{i=1}^{n}(X_i-\bar{X})^2$:

$$MSE(\mu,\sigma,\hat{S}^2) = Var_{\mu,\sigma}\hat{S}^2 + b(\sigma)^2 = \frac{2(n-1)\sigma^4}{n^2} + \frac{\sigma^4}{n^2} = \frac{2n-1}{n^2}\,\sigma^4.$$

Hence

$$MSE(\mu,\sigma,S^2) > MSE(\mu,\sigma,\hat{S}^2).$$

In any model it works similarly, just with different expressions.
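A Monte Carlo check that in the normal model the biased estimator indeed has the smaller MSE (µ, σ, n, and the replication count are chosen arbitrarily for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
mu, sigma, n, reps = 0.0, 1.0, 5, 400_000
samples = rng.normal(mu, sigma, size=(reps, n))
xbar = samples.mean(axis=1, keepdims=True)
ss = ((samples - xbar) ** 2).sum(axis=1)  # sum of squared deviations per replication

mse_s2 = ((ss / (n - 1) - sigma**2) ** 2).mean()  # theory: 2 sigma^4/(n-1) = 0.5
mse_s2_hat = ((ss / n - sigma**2) ** 2).mean()    # theory: (2n-1) sigma^4/n^2 = 0.36
```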

(17)

MSE and bias – Example 2.

Poisson model: X_1, X_2, ..., X_n are a sample from a Poisson distribution with unknown parameter θ.

$$\hat{\theta}_{ML} = \ldots = \bar{X}, \qquad b(\theta) = 0,$$

$$MSE(\theta,\bar{X}) = Var_\theta\,\bar{X} = \frac{1}{n^2}\sum_{i=1}^{n} Var_\theta X_i = \frac{\theta}{n}.$$
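These values can be confirmed by simulation (the θ, n, and replication count below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(4)
theta, n, reps = 2.5, 8, 200_000
samples = rng.poisson(theta, size=(reps, n))
xbar = samples.mean(axis=1)

bias_mc = xbar.mean() - theta          # close to 0: the sample mean is unbiased
mse_mc = ((xbar - theta) ** 2).mean()  # theory: theta / n = 0.3125
```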

(18)

Comparing estimators

ĝ₁(X) is better than (dominates) ĝ₂(X), if

$$\forall_{\theta\in\Theta}: MSE(\theta,\hat{g}_1) \le MSE(\theta,\hat{g}_2)$$

and

$$\exists_{\theta\in\Theta}: MSE(\theta,\hat{g}_1) < MSE(\theta,\hat{g}_2).$$

An estimator is better than a different estimator only if its MSE plot never lies above the MSE plot of the other estimator; if the plots intersect, the estimators are incomparable.

(19)

Comparing estimators – cont.

A lot of estimators are incomparable → comparing arbitrary estimators is pointless; we need to constrain the class of estimators. If we compare two unbiased estimators, the one with the smaller variance is better.

(20)

Comparing estimators – Example 1.

In any model:

From among µ̂ = X̄ and µ̂₁ = X₁, the estimator µ̂ = X̄ is better (for n > 1).

µ̂₁ = X₁ and µ̂₂ = 5 are incomparable, just like µ̂ = X̄ and µ̂₂ = 5.

From among S² and Ŝ², the estimator Ŝ² is better.

(21)

Minimum-variance unbiased estimator

We constrain comparisons to the class of unbiased estimators. In this class, one can usually find the best estimator:

g*(X) is a minimum-variance unbiased estimator (MVUE) for g(θ), if:

g*(X) is an unbiased estimator of g(θ),

for any unbiased estimator ĝ(X) we have $Var_\theta\,g^*(X) \le Var_\theta\,\hat{g}(X)$ for θ ∈ Θ.

(22)

How can we check if the estimator has a minimum variance?

In general, it is not possible to make the variance of unbiased estimators arbitrarily small: for many statistical models there exists a lower bound on the variance. It depends on the distribution and on the sample size.
