
ISSN 2083-8611 Nr 344 · 2017 Informatyka i Ekonometria 12

Ewa Rambally, Burman University, Division of Science, erambally@burmanu.ca

ON BIAS OF LS ESTIMATORS OF THE PARAMETERS IN AUTOREGRESSIVE MODEL AR(2)

Summary: This article presents the formula for the bias of the conditional LS estimator of the parameters of an autoregressive model of lag two with null constant term, based on four observations. It is shown that a crucial role in determining the bias of the LS estimators in AR(2) models is played by a random variable of the form $W = \frac{1}{a + c\varepsilon}$, where $a$ and $c$ are constants and $\varepsilon$ is the random term of the autoregressive equation. The formula for the bias of the LS estimators is supplemented by its approximation, together with a bound on the approximation error.

Keywords: Autoregressive process; Least-squares estimator; Bias of the LS estimator.

JEL Classification: C22, C13.

Introduction

Economies which, like the Polish economy, undergo periods of economic change are characterized by political and economic instability. This translates into instability of the time series recorded for many macro- and microeconomic indicators. In addition, leading economies react to political and economic decisions and events, which is often reflected in changes in the recorded time series of economic indicators.

In such situations, estimating the parameters of econometric models without accounting for breakpoints, or with incorrect specification, leads to models whose forecasts can be strongly biased. These dangers apply to autoregressive models as well.


Autoregressive models are studied because they are useful for prediction in many areas of science, from meteorology and biology to the social sciences, and particularly in exploring economic phenomena. Many economic time series, in particular macroeconomic indicators, can be modelled using autoregressive models. Of special interest are the properties of the parameter estimates of autoregressive models, especially their bias. Both the Yule-Walker estimators and the least-squares estimators of the parameters of autoregressive models are biased.

The bias of such estimators has been discussed and investigated, often as an element of other lines of research, by many authors: Hurwicz [1950] for AR(1), Marriott and Pope [1954] for models with a single autoregressive coefficient, and Kunitomo and Yamamoto [1985], who gave a representation of the bias when the selected model does not coincide with the true model. First-order autoregressive models were investigated by Le Breton and Pham [1989], Marriott and Pope [1954], Hurwicz [1950], White [1961] and Shenton and Johnson [1965]; models with maximum lag p = 2 were investigated by Tanaka [1984], Yamamoto and Kunitomo [1984] and Tjøstheim and Paulsen [1983], and Yamamoto and Kunitomo [1984] also explored the case p = 3. In a more direct approach, Tjøstheim and Paulsen [1983] used a Taylor series expansion of an estimator expressed as a function of correlation estimators, and the same approach was applied to vector autoregressive models by Yamamoto and Kunitomo [1984].

The last four mentioned authors explored the relationship between the coefficients of the process and the roots of the associated characteristic equations. Shaman and Stine [1988] gave exact formulas for the bias of the Yule-Walker as well as the least-squares estimators of autoregressive models, with known and with unknown mean. Most of the mentioned authors use the autocorrelation and the spectral density as the basis of their analysis.

It has been shown that, as the sample size increases indefinitely, the distribution of the LS estimates of the parameters of autoregressive models approaches the normal distribution and the bias vanishes [Shumway, Stoffer, 2006; Lai, Wei, 1983]. This neat property is of no use when the number of observations is small, or when estimation has to be performed on a small number of observations due to structural breaks.

The research presented in this paper arose in the context of work on determining breakpoints in macroeconomic time series, which requires knowledge of the small-sample properties of the parameter estimates of autoregressive models [Kwiatkowska, 2002; 2004]. It is a small part of a larger project whose aim is to determine the statistical properties of an algorithm that allows an autoregressive model to be specified and breakpoints to be determined simultaneously, without restrictions on the number of such breakpoints (such restrictions are common in the literature). The project will be complete once the relevant statistical tests are derived. One of the steps of this project is the derivation of the statistical properties of the parameter estimates in autoregressive models. Such estimators are recorded as vector sequences and turn out to be small-sample LS estimators. A natural generalization of this paper would be to determine such bias for models with a maximum lag higher than two and, eventually, for models with an arbitrary known maximum lag.

In this paper, we consider an autoregressive model of lag two with the constant term equal to zero. The focus is on the bias of the LS estimates of the parameters of such a model. Only samples of size four are considered, which reflects directly the fact that such a sample is used to solve the relevant system of equations determining the estimator. Similarly to the direct approach of Hurwicz [1950], the author considers an autoregressive model without its mean, and the estimation is derived from the original data series, without using the values of autocorrelations. All derivations are made under the standard assumption that the random components are independent and normally distributed with mean zero and standard deviation $\sigma$. The choice of a small sample is justified by Theorem 1 in this paper and by the related method of determining breakpoints in time series proposed in Kwiatkowska [2002; 2004]. The topic undertaken in this paper is thus a natural continuation of the development of an algorithm for the simultaneous determination of breakpoints and of the model's maximum lag, and is specific to autoregressive models. Most, if not all, methods for determining breakpoints in econometric models address the problem in models with exogenous variables only. Such methods are then applied to autoregressive models, although the properties of such methods in the presence of endogenous variables are not always known. The method towards which the author works in this paper is designed specifically for autoregressive models.

1. Introduction of the problem

Consider the autoregressive model of maximum lag $p = 2$ with no constant term and with constant regression coefficients, in the form:

$x_t = \alpha_1 x_{t-1} + \alpha_2 x_{t-2} + \varepsilon_t$,  (1)

where $X_t$, $t = 1, 2, \ldots, n$, denote the random variables and $x_t$ their observed values, and $\varepsilon_t$ are independent normal random variables with mean zero and unknown constant variance $\sigma^2$ for all $t = 3, 4, \ldots, n$. The two initial observations $x_1$, $x_2$ are assumed to be known.


Assumption 1. Assume that the sequence $(x_t : t = 3, 4, \ldots, n)$ consists of observations generated by model (1).

For each $t$, the (small-sample) vector estimator $\widehat{\alpha}_t = (\widehat{\alpha}_{1,t}, \widehat{\alpha}_{2,t})'$ of the coefficient vector $\alpha = (\alpha_1, \alpha_2)'$ is defined as the solution of the system of equations:

$x_{t-1} = \widehat{\alpha}_{1,t}\, x_{t-2} + \widehat{\alpha}_{2,t}\, x_{t-3}$

$x_t = \widehat{\alpha}_{1,t}\, x_{t-1} + \widehat{\alpha}_{2,t}\, x_{t-2}$

and its conditional expectation is denoted $E\big(\widehat{\alpha}_t \mid X_1 = x_1, X_2 = x_2, \ldots\big)$.

Denote:

$\mathbf{X}_t = \begin{pmatrix} x_{t-2} & x_{t-3} \\ x_{t-1} & x_{t-2} \end{pmatrix}$, $\quad Y_t = \begin{pmatrix} x_{t-1} \\ x_t \end{pmatrix}$.

It is easy to show that this estimator is an LS estimator. Namely, in the case of our model, the LS estimator has the form:

$\widehat{\alpha}_t = (\mathbf{X}_t'\mathbf{X}_t)^{-1}\mathbf{X}_t' Y_t = \mathbf{X}_t^{-1}(\mathbf{X}_t')^{-1}\mathbf{X}_t' Y_t = \mathbf{X}_t^{-1} Y_t$.

Moreover, the following theorem holds [Kwiatkowska, 2002; 2004 - rephrased]:

Theorem 1. In the deterministic case, where $\varepsilon_t = 0$, the original time series is generated by equation (1) with constant coefficients if and only if the estimators $\widehat{\alpha}_t$ coincide for all $t = 4, 5, \ldots, n$. This is true for any natural value of the maximum lag $p$.
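Theorem 1 can be illustrated directly: in the deterministic case the rolling four-observation estimator takes the same value at every $t$. The sketch below is only an illustration; the coefficient values and initial observations are chosen arbitrarily and are not taken from the paper.

```python
import numpy as np

alpha1, alpha2 = 0.5, 0.3                 # illustrative constant coefficients
x = [1.0, 2.0]                            # the two known initial observations
for _ in range(10):                       # deterministic recursion: eps_t = 0 in model (1)
    x.append(alpha1 * x[-1] + alpha2 * x[-2])

for t in range(3, len(x)):                # estimator built from the four observations x[t-3..t]
    X = np.array([[x[t-2], x[t-3]], [x[t-1], x[t-2]]])
    Y = np.array([x[t-1], x[t]])
    print(np.linalg.solve(X, Y))          # constant and equal to (alpha1, alpha2)
```

Each printed pair coincides with the generating coefficients, which is the constancy property that the breakpoint algorithm exploits.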

Our interest is to determine the statistical properties of this estimator under limited conditions, namely to determine the statistical properties of $\widehat{\alpha}_{1,t}$ and $\widehat{\alpha}_{2,t}$. In this case, we have the following equality:

$Y_t = \mathbf{X}_t\,\alpha + \mathcal{E}_t$, where $\mathcal{E}_t = (\varepsilon_{t-1}, \varepsilon_t)'$.  (2)

Under Assumption 1, and based on Theorem 1, the matrix $\mathbf{X}_t$ is invertible with probability one, since $\det\mathbf{X}_t = 0$ holds only for a single (discrete) value of $\varepsilon_{t-1}$, while $\varepsilon_{t-1}$ has a continuous normal distribution.

After multiplying equation (2) by the inverse of $\mathbf{X}_t$ we have:

$\widehat{\alpha}_t - \alpha = \mathbf{X}_t^{-1}\,\mathcal{E}_t$,

where the conditional expectation $E\big(\det\mathbf{X}_t \mid X_{t-3} = x_{t-3}, X_{t-2} = x_{t-2}\big)$ is an unknown (due to the unknown vector $\alpha$) but deterministic value.
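Before writing the coordinates out explicitly, the identity $\widehat{\alpha}_t - \alpha = \mathbf{X}_t^{-1}\mathcal{E}_t$ can be checked numerically. The sketch below simulates one draw of model (1) with four observations; the coefficient values, $\sigma$ and the initial observations are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha1, alpha2, sigma = 0.5, 0.3, 1.0    # illustrative true coefficients and noise s.d.
x1, x2 = 1.0, 2.0                         # the two known initial observations

e3, e4 = rng.normal(0.0, sigma, size=2)
x3 = alpha1 * x2 + alpha2 * x1 + e3       # model (1) for t = 3 and t = 4
x4 = alpha1 * x3 + alpha2 * x2 + e4

X = np.array([[x2, x1], [x3, x2]])        # the 2x2 system built from four observations
Y = np.array([x3, x4])
alpha_hat = np.linalg.solve(X, Y)         # small-sample LS estimator

# the estimation error equals X^{-1} applied to the vector of random terms
print(alpha_hat - np.array([alpha1, alpha2]))
print(np.linalg.solve(X, np.array([e3, e4])))   # identical up to rounding
```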


From the above equalities, we obtain:

$\widehat{\alpha}_t - \alpha = \dfrac{1}{x_{t-2}^2 - x_{t-3}x_{t-1}}\begin{pmatrix} x_{t-2}\,\varepsilon_{t-1} - x_{t-3}\,\varepsilon_t \\ -x_{t-1}\,\varepsilon_{t-1} + x_{t-2}\,\varepsilon_t \end{pmatrix}$

and therefore the coordinates of the above vector can be expressed as follows:

$\widehat{\alpha}_{1,t} - \alpha_1 = \dfrac{x_{t-2}\,\varepsilon_{t-1} - x_{t-3}\,\varepsilon_t}{x_{t-2}^2 - x_{t-3}x_{t-1}}$,

$\widehat{\alpha}_{2,t} - \alpha_2 = \dfrac{-x_{t-1}\,\varepsilon_{t-1} + x_{t-2}\,\varepsilon_t}{x_{t-2}^2 - x_{t-3}x_{t-1}}$.

Our interest is to determine the bias of $\widehat{\alpha}_t$; therefore, we will explore the expected values of the above differences.

For simplicity of notation, denote:

$a = x_{t-2}$, $\quad c = x_{t-1} - \varepsilon_{t-1} = \alpha_1 x_{t-2} + \alpha_2 x_{t-3}$, $\quad d = -x_{t-3}$, $\quad b = x_{t-2}^2 - \alpha_1 x_{t-3}x_{t-2} - \alpha_2 x_{t-3}^2 = a^2 + cd$,  (3)

so that $\det\mathbf{X}_t = b + d\varepsilon_{t-1}$, with $b$ and $d$ deterministic given $x_{t-3}$ and $x_{t-2}$. First consider the first coordinate:

$\widehat{\alpha}_{1,t} - \alpha_1 = \dfrac{a\,\varepsilon_{t-1} + d\,\varepsilon_t}{b + d\varepsilon_{t-1}}$.

Since $\dfrac{\varepsilon_{t-1}}{b + d\varepsilon_{t-1}} = \dfrac{1}{d}\left(1 - \dfrac{b}{b + d\varepsilon_{t-1}}\right)$, and since $\varepsilon_{t-1}$ and $\varepsilon_t$ are stochastically independent with mean zero, the expected value is:

$E\big(\widehat{\alpha}_{1,t} - \alpha_1\big) = a\,E\!\left(\dfrac{\varepsilon_{t-1}}{b + d\varepsilon_{t-1}}\right) = \dfrac{a}{d}\left[1 - b\,E\!\left(\dfrac{1}{b + d\varepsilon_{t-1}}\right)\right]$.  (4)

The second coordinate takes the following form:

$\widehat{\alpha}_{2,t} - \alpha_2 = \dfrac{-x_{t-1}\,\varepsilon_{t-1} + a\,\varepsilon_t}{b + d\varepsilon_{t-1}} = \dfrac{-(c + \varepsilon_{t-1})\,\varepsilon_{t-1} + a\,\varepsilon_t}{b + d\varepsilon_{t-1}}$

and therefore its expected value is as follows:

$E\big(\widehat{\alpha}_{2,t} - \alpha_2\big) = -c\,E\!\left(\dfrac{\varepsilon_{t-1}}{b + d\varepsilon_{t-1}}\right) - E\!\left(\dfrac{\varepsilon_{t-1}^2}{b + d\varepsilon_{t-1}}\right)$.

Under the assumptions on the random variables $\varepsilon_{t-1}$ and $\varepsilon_t$, and based on the properties of expectation (in particular, $\dfrac{\varepsilon_{t-1}^2}{b + d\varepsilon_{t-1}} = \dfrac{\varepsilon_{t-1}}{d} - \dfrac{b}{d}\cdot\dfrac{\varepsilon_{t-1}}{b + d\varepsilon_{t-1}}$), we have:

$E\big(\widehat{\alpha}_{2,t} - \alpha_2\big) = \dfrac{b - cd}{d^2}\left[1 - b\,E\!\left(\dfrac{1}{b + d\varepsilon_{t-1}}\right)\right]$

and finally, since $b - cd = a^2$:

$E\big(\widehat{\alpha}_{2,t} - \alpha_2\big) = \dfrac{a^2}{d^2}\left[1 - b\,E\!\left(\dfrac{1}{b + d\varepsilon_{t-1}}\right)\right]$.  (5)

From (4) and (5) it is evident that the bias of $\widehat{\alpha}_t$ depends stochastically only on $E\!\left(\dfrac{1}{b + d\varepsilon_{t-1}}\right)$, and this expected value is investigated next.

2. Preliminary outcomes

Assume that $f(\varepsilon)$ is the density function of the normal distribution with mean zero and standard deviation $\sigma$, that is, $f(\varepsilon) = \dfrac{1}{\sigma\sqrt{2\pi}}\exp\!\left(-\dfrac{\varepsilon^2}{2\sigma^2}\right)$. Note that the random variable $W = \dfrac{1}{b + d\varepsilon}$ is undefined when $\varepsilon = -\dfrac{b}{d}$, and near this point it attains arbitrarily large positive or negative values.

In determining the expectation of a function $g(X)$ of a random variable $X$ with density function $f(x)$, we will use the following classic identity [Ross, 2010]:

$E\big[g(X)\big] = \displaystyle\int_{-\infty}^{\infty} g(x)\,f(x)\,dx$.

The integral defining the expectation of interest is an improper integral of both type I and type II; therefore it must be presented as follows:

$\displaystyle\int_{-\infty}^{\infty} \dfrac{f(\varepsilon)}{b + d\varepsilon}\,d\varepsilon = \int_{-\infty}^{-\frac{b}{d}-q} \dfrac{f(\varepsilon)}{b + d\varepsilon}\,d\varepsilon + \int_{-\frac{b}{d}+q}^{\infty} \dfrac{f(\varepsilon)}{b + d\varepsilon}\,d\varepsilon + \int_{-\frac{b}{d}-q}^{-\frac{b}{d}+q} \dfrac{f(\varepsilon)}{b + d\varepsilon}\,d\varepsilon$,

where $q > 0$ is an arbitrary constant. Denote:

$I_1(q) := \displaystyle\int_{-\infty}^{-\frac{b}{d}-q} \dfrac{f(\varepsilon)}{b + d\varepsilon}\,d\varepsilon$, $\quad I_2(q) := \displaystyle\int_{-\frac{b}{d}+q}^{\infty} \dfrac{f(\varepsilon)}{b + d\varepsilon}\,d\varepsilon$, $\quad I_3(q) := \displaystyle\int_{-\frac{b}{d}-q}^{-\frac{b}{d}+q} \dfrac{f(\varepsilon)}{b + d\varepsilon}\,d\varepsilon$.

Consider the integral $I_3(q)$. Denote $g(\varepsilon) := \dfrac{1}{b + d\varepsilon}$. The graph of the function $y = g(\varepsilon)$ is symmetrical about the point $\left(-\frac{b}{d}, 0\right)$; therefore, for any real $\varepsilon$, $\varepsilon \neq -\frac{b}{d}$, $g\!\left(-\frac{b}{d} + \varepsilon\right) = -g\!\left(-\frac{b}{d} - \varepsilon\right)$. The function $f(\varepsilon)$ is symmetrical about the y-axis, that is, $f(-\varepsilon) = f(\varepsilon)$. Pairing the points of the interval $\left(-\frac{b}{d} - q,\ -\frac{b}{d} + q\right)$ that are symmetric about $-\frac{b}{d}$ therefore gives:

$I_3(q) = \displaystyle\int_{-\frac{b}{d}}^{-\frac{b}{d}+q} g(\varepsilon)\left[f(\varepsilon) - f\!\left(\varepsilon + \dfrac{2b}{d}\right)\right] d\varepsilon$.

The graph below represents the idea of using translation and reflection to produce the above formula for $I_3(q)$.

Fig. 1. Graphs of the absolute value function (dark blue), the shifted normal density function (light blue), the normal density function (red) and the resulting difference (brown), illustrating the translation and reflection described above

Source: Own research.

Define:

$h(\varepsilon) := f(\varepsilon) - f\!\left(\varepsilon + \dfrac{2b}{d}\right)$,

so that $I_3(q)$ can be represented as $\displaystyle\int_{-\frac{b}{d}}^{-\frac{b}{d}+q} g(\varepsilon)\,h(\varepsilon)\,d\varepsilon$. Based on this definition, and depending on the signs of $b$ and $d$, on the interval $\left(-\frac{b}{d},\ -\frac{b}{d}+q\right)$ we need to consider the following cases:

1) if $-\dfrac{b}{d} > 0$, $d < 0$ (or $b > 0$, $d < 0$), then $h(\varepsilon) = f(\varepsilon) - f\!\left(\varepsilon + \dfrac{2b}{d}\right) \leq 0$;  (6)

2) if $-\dfrac{b}{d} < 0$, $d < 0$ (or $b < 0$, $d < 0$), then $h(\varepsilon) = f(\varepsilon) - f\!\left(\varepsilon + \dfrac{2b}{d}\right) \geq 0$;  (7)


3) if $-\dfrac{b}{d} < 0$, $d > 0$ (or $b > 0$, $d > 0$), then $h(\varepsilon) = f(\varepsilon) - f\!\left(\varepsilon + \dfrac{2b}{d}\right) \geq 0$;

4) if $-\dfrac{b}{d} > 0$, $d > 0$ (or $b < 0$, $d > 0$), then $h(\varepsilon) = f(\varepsilon) - f\!\left(\varepsilon + \dfrac{2b}{d}\right) \leq 0$.
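The construction behind Fig. 1 and the sign of $h$ can be reproduced with illustrative constants; the sketch below is only a visual aid, with $\sigma$ and the shift $b/d$ chosen arbitrarily.

```python
import numpy as np
import matplotlib.pyplot as plt

sigma, shift = 1.0, 0.8               # illustrative sigma and shift a = b/d > 0
f = lambda x: np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

eps = np.linspace(-4, 4, 400)
plt.plot(eps, f(eps), label="normal density f(eps)")
plt.plot(eps, f(eps + 2 * shift), label="shifted density f(eps + 2b/d)")
plt.plot(eps, f(eps) - f(eps + 2 * shift), label="difference h(eps)")
plt.axvline(-shift, linestyle="--", linewidth=0.8)   # point of symmetry of 1/(b + d*eps)
plt.legend()
plt.show()
```

For a positive shift the difference $h$ is non-negative to the right of the symmetry point, which corresponds to case 3 above.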

Continuing under the assumption that $\varepsilon \in \left(-\dfrac{b}{d},\ -\dfrac{b}{d} + q\right)$, the above cases give the following outcomes about the integral $I_3(q)$, respectively.

Proposition 1. For every $q > 0$ and for every $\varepsilon \in \left(-\dfrac{b}{d},\ -\dfrac{b}{d} + q\right)$:

1) if $b > 0$, $d < 0$, then $h(\varepsilon) \leq 0$ and $I_3(q) < 0$;

2) if $b < 0$, $d < 0$, then $h(\varepsilon) \geq 0$ and $I_3(q) < 0$;

3) if $b > 0$, $d > 0$, then $h(\varepsilon) \geq 0$ and $I_3(q) > 0$;

4) if $b < 0$, $d > 0$, then $h(\varepsilon) \leq 0$ and $I_3(q) > 0$.

Proof. The proof is directly obtained from (7) and properties of integrals.

Due to the symmetries of the above cases, the value of the integral $I_3(q)$ is the same in cases 1 and 2, and likewise the same in cases 3 and 4, for any real value $q > 0$. In the proposition below it is shown that the value of this integral in cases 1 and 2 differs from its value in cases 3 and 4 only by the sign.

Proposition 2. Let $J(q, b, d) = \displaystyle\int_{-\frac{b}{d}}^{-\frac{b}{d}+q} g(\varepsilon)\,h(\varepsilon)\,d\varepsilon$, that is, the integral $I_3(q)$ regarded as a function of the constants $b$ and $d$. Assume $q > 0$. Consider $J(q, b, d)$ with $b > 0$, $d > 0$; $J(q, b_1, d)$ with $b_1 = -b < 0$; $J(q, b, d_1)$ with $d_1 = -d < 0$; and $J(q, b_1, d_1)$, with $b_1$ and $d_1$ as above. Then $J(q, b, d) = J(q, b_1, d_1) > 0$ and $J(q, b_1, d) = J(q, b, d_1) = -J(q, b, d) < 0$.

Proof. We will show one of the given equalities; the remaining equalities can be proven in a similar manner. Consider an arbitrary value $0 \leq t \leq q$; then the point $-\dfrac{b}{d} + t$ belongs to $\left(-\dfrac{b}{d},\ -\dfrac{b}{d}+q\right)$, and the value of the integrand of $J(q, b, d)$ at this point can be compared with the value of the integrand of $J(q, b_1, d)$ at the point lying at the same distance $t$ from $-\dfrac{b_1}{d}$. Writing out $h$ explicitly at both points shows that these two values of $h$ differ only by sign, while the factor $g$ takes the same value at both points. Therefore, the values of the functions under the integrals $J(q, b, d)$ and $J(q, b_1, d)$ at points distant from $-\dfrac{b}{d}$ and $-\dfrac{b_1}{d}$, respectively, by an arbitrary $t > 0$ differ only by sign; thus $J(q, b_1, d) = -J(q, b, d)$.

In addition, $J(q, b, d) > 0$, because in this case both $g(\varepsilon) > 0$ and $h(\varepsilon) > 0$ for all $\varepsilon \geq -\dfrac{b}{d}$.

In conclusion: to determine $I_3(q)$ in any of the four cases listed in Proposition 1, it is sufficient to determine it in one of the cases, say for $b > 0$, $d > 0$, and then adjust the sign appropriately.

In the analysis to follow we will consider the case $b > 0$, $d > 0$.

Assume $b > 0$, $d > 0$. Under the assumption of the normal distribution $N(0, \sigma)$ of the random variable $\varepsilon$, the direct form of the difference $f(\varepsilon) - f\!\left(\varepsilon + \dfrac{2b}{d}\right)$ is derived as follows:

$h(\varepsilon) = \dfrac{1}{\sigma\sqrt{2\pi}}\exp\!\left(-\dfrac{\varepsilon^2}{2\sigma^2}\right) - \dfrac{1}{\sigma\sqrt{2\pi}}\exp\!\left(-\dfrac{\left(\varepsilon + \frac{2b}{d}\right)^2}{2\sigma^2}\right)$

and therefore:

$h(\varepsilon) = \dfrac{1}{\sigma\sqrt{2\pi}}\exp\!\left(-\dfrac{\varepsilon^2}{2\sigma^2}\right)\left[1 - \exp\!\left(-\dfrac{2b}{d\sigma^2}\left(\varepsilon + \dfrac{b}{d}\right)\right)\right]$.  (8)

For convenience of notation denote $S = -\dfrac{2b}{d\sigma^2}$. Under the assumption $b > 0$, $d > 0$, we have $S < 0$.

Based on (8), the integral $I_3(q)$ can be expressed as follows:

$I_3(q) = \dfrac{1}{\sigma\sqrt{2\pi}}\displaystyle\int_{-\frac{b}{d}}^{-\frac{b}{d}+q} \dfrac{1}{b + d\varepsilon}\exp\!\left(-\dfrac{\varepsilon^2}{2\sigma^2}\right)\left[1 - \exp\!\left(S\left(\varepsilon + \dfrac{b}{d}\right)\right)\right] d\varepsilon$

and finally:

$I_3(q) = \displaystyle\int_{-\frac{b}{d}}^{-\frac{b}{d}+q} f(\varepsilon)\,\dfrac{1 - \exp\!\left(S\left(\varepsilon + \frac{b}{d}\right)\right)}{b + d\varepsilon}\,d\varepsilon$.  (9)

Observe that:

$E(W) = E\!\left(\dfrac{1}{b + d\varepsilon}\right) = \lim_{q\to\infty} I_3(q)$.  (10)

By substituting $u = \varepsilon + \dfrac{b}{d}$, the integral in (9) takes the form:

$I_3(q) = \displaystyle\int_{0}^{q} \dfrac{f\!\left(u - \frac{b}{d}\right)}{d\,u}\,\big(1 - \exp(Su)\big)\,du$,  (11)

or, in a more concise form:

$I_3(q) = \displaystyle\int_{0}^{q} \varphi(u)\,du$,  (12)

where $\varphi(u) = \dfrac{f\!\left(u - \frac{b}{d}\right)}{d\,u}\,\big(1 - \exp(Su)\big)$.

The integrals in (9), and therefore in (10), are not elementary. We can approximate them using numerical methods, establish bounds for their values, or evaluate them using a power series approach.
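As an example of the numerical route, the sketch below evaluates the integrand of (12) with quadrature for several values of $q$; the constants $b$, $d$ and $\sigma$ are illustrative and not taken from the paper.

```python
import numpy as np
from scipy import integrate

b, d, sigma = 1.0, 1.0, 1.0               # illustrative constants with b > 0, d > 0
a, S = b / d, -2.0 * b / (d * sigma**2)   # pole of 1/(b + d*eps) at -a; S as in (8)

def f(x):
    """Density of N(0, sigma^2)."""
    return np.exp(-x**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))

def phi(u):
    """Integrand of (12) after the substitution u = eps + b/d (finite as u -> 0)."""
    return f(u - a) * (1.0 - np.exp(S * u)) / (d * u)

for q in (1.0, 2.0, 5.0, np.inf):
    val, err = integrate.quad(phi, 0.0, q)
    print(q, val)                          # I_3(q) increases towards the limit E(W) of (10)
```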

Below, a series representation of $I_3(q)$ is discussed.

Proposition 3. Assume $b > 0$, $d > 0$. Then

$\displaystyle\int_{0}^{q} \dfrac{1 - \exp(Su)}{u}\,du = -\sum_{n=1}^{\infty}\dfrac{(Sq)^n}{n\cdot n!}$

and the series on the right-hand side is an alternating, absolutely convergent series with radius of convergence $\infty$.

Proof. Consider the fact [Bronsztejn, Siemiendiajew, 1996, p. 479, formula 451] that:

$\displaystyle\int \dfrac{e^x}{x}\,dx = \ln|x| + \dfrac{x}{1\cdot 1!} + \dfrac{x^2}{2\cdot 2!} + \dfrac{x^3}{3\cdot 3!} + \cdots$

Thus we obtain:

$\displaystyle\int \dfrac{1 - \exp(Su)}{u}\,du = \ln|u| - \ln|Su| - \dfrac{Su}{1\cdot 1!} - \dfrac{(Su)^2}{2\cdot 2!} - \dfrac{(Su)^3}{3\cdot 3!} - \cdots$

and further, considering the definite integral over $(0, q)$ and the explicit form of $S$, we have:

$\displaystyle\int_{0}^{q} \dfrac{1 - \exp(Su)}{u}\,du = -\sum_{n=1}^{\infty}\dfrac{S^n q^n}{n\cdot n!} = -\sum_{n=1}^{\infty}\dfrac{(Sq)^n}{n\cdot n!}$.  (13)

The integral in (10) will be used in the second part of this paper.

Assuming $b > 0$, $d > 0$, we have $S < 0$, and therefore the series in (13) is an alternating series. Let $a_n = \dfrac{|Sq|^n}{n\cdot n!} > 0$ for $n > 0$. Note that $\lim_{n\to\infty} a_n = 0$. In addition, there exists a natural number $N \geq 1$ such that $\dfrac{a_{n+1}}{a_n} = \dfrac{|Sq|\,n}{(n+1)^2} < 1$ for all $n > N$, and thus the sequence $(a_n : n > N)$ of all but some initial terms is decreasing. Based on the alternating series test, the series is convergent for every value of $q$. Since $\lim_{n\to\infty} a_n = 0$, the convergence is unconditional for every $q > 0$. This implies the convergence of the sequence of partial sums $\left(-\sum_{n=1}^{m}\frac{(Sq)^n}{n\cdot n!} : m > 0\right)$ for every $q > 0$. Therefore, the radius of convergence is infinity.
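The expansion used in the proof can be checked numerically; the sketch below compares the integral in (13) with the truncated series for illustrative values of $S$ and $q$.

```python
import numpy as np
from scipy import integrate
from math import factorial

S, q = -1.5, 2.0                      # illustrative values; S < 0 as assumed in the text

# left-hand side of (13): the integral of (1 - exp(S*u))/u over (0, q)
lhs, _ = integrate.quad(lambda u: -np.expm1(S * u) / u, 0.0, q)

# right-hand side of (13): the truncated series  -sum_{n>=1} (S*q)^n / (n * n!)
rhs = -sum((S * q)**n / (n * factorial(n)) for n in range(1, 30))

print(lhs, rhs)                        # the two values agree to high accuracy
```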

Proposition 4. Assume $b \neq 0$. Consider $\lim_{q\to\infty} J(q, b, d)$ and the cases as in Proposition 2. We have the following:

$\lim_{q\to\infty} J(q, b, d) = \lim_{q\to\infty} J(q, b_1, d_1) > 0$ and $\lim_{q\to\infty} J(q, b_1, d) = \lim_{q\to\infty} J(q, b, d_1) = -\lim_{q\to\infty} J(q, b, d) < 0$.

Proof. The thesis follows directly from Proposition 2, the existence of the limit $\lim_{q\to\infty} J(q, b, d)$, and the properties of the integral. In fact, it is an extension of Proposition 2 to the improper integral $\lim_{q\to\infty} J(q, b, d)$.

3. Expectation of W and evaluation of the bias

First, we investigate bounds for this integral; subsequently, using the power series approach, the integral is evaluated as $q \to +\infty$, leading to the expected value of the random variable $W = \dfrac{1}{b + d\varepsilon}$ and, further, to the bias of the LS estimates of the parameters of model (1) given four observations.

Theorem 2. If $b > 0$, $d > 0$, the following inequalities give bounds for $E(W)$:

$I_3(q) - \dfrac{1}{dq}\,F\!\left(-\dfrac{b}{d} - q\right) \;<\; E(W) \;<\; I_3(q) + \dfrac{1}{dq}\left[1 - F\!\left(-\dfrac{b}{d} + q\right)\right] \;<\; \infty$,  (14)

where $I_3(q)$ is defined as in (9)-(12) and $F$ denotes the distribution function of $\varepsilon$, and also:

$0 \;<\; E(W) \;<\; -\dfrac{S}{d}\left[1 - F\!\left(-\dfrac{b}{d}\right)\right]$,  (15)

where $S$ is as defined by formula (8).

Proof. If $b, d > 0$ (so that $S < 0$), the function $\dfrac{1}{b + d\varepsilon}$ is decreasing, and on the outer intervals its absolute value does not exceed $\dfrac{1}{dq}$; therefore the integral $I_1(q) + I_2(q)$, where $I_1(q)$ and $I_2(q)$ are defined earlier, can be bounded as follows:

$I_1(q) + I_2(q) < \dfrac{1}{dq}\displaystyle\int_{-\frac{b}{d}+q}^{\infty} f(\varepsilon)\,d\varepsilon = \dfrac{1}{dq}\left[1 - F\!\left(-\dfrac{b}{d} + q\right)\right]$,

while, since $I_2(q) > 0$, also $I_1(q) + I_2(q) > -\dfrac{1}{dq}\,F\!\left(-\dfrac{b}{d} - q\right)$. In such a case, it is obtained that for any chosen $0 < q < \infty$:

$I_3(q) - \dfrac{1}{dq}\,F\!\left(-\dfrac{b}{d} - q\right) < \displaystyle\int_{-\infty}^{\infty}\dfrac{f(\varepsilon)}{b + d\varepsilon}\,d\varepsilon = I_1(q) + I_2(q) + I_3(q) < I_3(q) + \dfrac{1}{dq}\left[1 - F\!\left(-\dfrac{b}{d} + q\right)\right] < \infty$

and the expression $I_3(q) + \dfrac{1}{dq}\left[1 - F\!\left(-\dfrac{b}{d} + q\right)\right]$ provides an upper bound for $E(W)$ for any arbitrarily chosen $q > 0$.

On the other hand, we have:

$E(W) = \lim_{q\to\infty}\displaystyle\int_{0}^{q}\dfrac{f\!\left(u - \frac{b}{d}\right)}{d\,u}\,\big(1 - \exp(Su)\big)\,du$.

Based on the fact that $0 < 1 - \exp(Su) < -Su$ for $u > 0$, we have:

$0 < E(W) < -\dfrac{S}{d}\displaystyle\int_{0}^{\infty} f\!\left(u - \dfrac{b}{d}\right) du$.

Since this last integral equals $1 - F\!\left(-\dfrac{b}{d}\right)$, therefore:

$0 < E(W) < -\dfrac{S}{d}\left[1 - F\!\left(-\dfrac{b}{d}\right)\right]$,

where, as defined before, $S = -\dfrac{2b}{d\sigma^2}$.
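A minimal numerical check of the bound in (15) can be sketched as follows; the values of $b$, $d$ and $\sigma$ are illustrative only.

```python
import numpy as np
from scipy import integrate
from scipy.stats import norm

b, d, sigma = 1.0, 1.0, 1.0           # illustrative constants with b > 0, d > 0
a, S = b / d, -2.0 * b / (d * sigma**2)

f = lambda x: np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
EW, _ = integrate.quad(lambda u: f(u - a) * (1 - np.exp(S * u)) / (d * u), 0, np.inf)

upper = -(S / d) * norm.sf(-a / sigma)     # -(S/d) * (1 - F(-b/d))
print(0 < EW < upper, EW, upper)           # the bound of (15) holds for these values
```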


The above theorem gives bounds for $E(W)$, and their values depend on $q$. It would be interesting to investigate for what value of $q$ such bounding of $E(W)$ is most effective, that is, for which it gives the shortest possible interval containing $E(W)$.

Another approach is to use the power series to evaluate $E(W)$. In this approach we derive the exact formula for $E(W)$, the expectation of the random variable $W$.

Theorem 3. Assume that the random variable $\varepsilon$ has a normal distribution with mean zero and standard deviation $\sigma$. The expected value of the random variable $W = \dfrac{1}{b + d\varepsilon}$, where $b > 0$, $d > 0$, is given by the expression:

$E(W) = -\dfrac{1}{d\,\sigma\sqrt{2\pi}}\sum_{n=1}^{\infty}\dfrac{S^n}{n!}\sum_{j=0}^{n-1}\binom{n-1}{j}\left(\dfrac{b}{d}\right)^{n-1-j} I(j, \infty)$,  (16)

where $S = -\dfrac{2b}{d\sigma^2}$, $I(j, \infty) = \displaystyle\int_{-\frac{b}{d}}^{\infty} x^{j}\exp\!\left(-\dfrac{x^2}{2\sigma^2}\right)dx$, and, for all $k \in \mathbb{N}$:

$I(2k-1, \infty) = \dfrac{(k-1)!\,(2\sigma^2)^{k}}{2}\,\exp\!\left(-\dfrac{b^2}{2d^2\sigma^2}\right)\sum_{j=0}^{k-1}\dfrac{1}{j!}\left(\dfrac{b^2}{2d^2\sigma^2}\right)^{j}$  (17)

and:

$I(2k, \infty) = \sigma^{2k+1}\sqrt{2\pi}\left[\prod_{i=1}^{k}(2i-1)\right]\left[1 - F\!\left(-\dfrac{b}{d}\right)\right] + \exp\!\left(-\dfrac{b^2}{2d^2\sigma^2}\right)\sum_{j=1}^{k}\left[\prod_{i=j+1}^{k}(2i-1)\right]\sigma^{2(k-j+1)}\left(-\dfrac{b}{d}\right)^{2j-1}$,  (18)

where $F$ denotes the distribution function of $\varepsilon$. In addition, truncating the series after $N$ terms gives the approximation:

$E(W) \approx -\dfrac{1}{d\,\sigma\sqrt{2\pi}}\sum_{n=1}^{N}\dfrac{S^n}{n!}\sum_{j=0}^{n-1}\binom{n-1}{j}\left(\dfrac{b}{d}\right)^{n-1-j} I(j, \infty)$  (19)

and the error of this approximation can be bounded as follows:

$|R(N)| \leq \dfrac{|S|^{N+1}}{d\,\sigma\sqrt{2\pi}\,(N+1)!}\sum_{j=0}^{N}\binom{N}{j}\left(\dfrac{b}{d}\right)^{N-j} I(j, \infty)$.  (20)

Proof. Consider again the integral in (13):

$\displaystyle\int_{0}^{q}\dfrac{1 - \exp(Su)}{u}\,du = -\sum_{n=1}^{\infty}\dfrac{(Sq)^n}{n\cdot n!}$.

Under the assumption $b > 0$, $d > 0$, the interval of convergence of the above series is $(-\infty, \infty)$; thus the series converges uniformly to its limit on every bounded interval. In addition, by the fundamental theorem of calculus for uniformly convergent series, we can differentiate and integrate term by term. So the following is true:

$\dfrac{1 - \exp(Su)}{u} = -\sum_{n=1}^{\infty}\dfrac{S^n u^{n-1}}{n!}$.

This leads to the following sequence of equalities:

$\varphi(u) = \dfrac{f\!\left(u - \frac{b}{d}\right)}{d\,u}\,\big(1 - \exp(Su)\big) = -\dfrac{1}{d}\,f\!\left(u - \dfrac{b}{d}\right)\sum_{n=1}^{\infty}\dfrac{S^n u^{n-1}}{n!}$.

Integrating term by term over $(0, q)$, we obtain:

$\Phi(q) := I_3(q) = -\dfrac{1}{d}\sum_{n=1}^{\infty}\dfrac{S^n}{n!}\displaystyle\int_{0}^{q} u^{n-1} f\!\left(u - \dfrac{b}{d}\right) du$.

This is an alternating series with the interval of convergence $(-\infty, \infty)$. Based on the Alternating Series Remainder Estimate, truncating it after $N$ terms gives an error not exceeding the absolute value of the $(N+1)$-st term:

$\left|\Phi(q) + \dfrac{1}{d}\sum_{n=1}^{N}\dfrac{S^n}{n!}\displaystyle\int_{0}^{q} u^{n-1} f\!\left(u - \dfrac{b}{d}\right) du\right| \leq \dfrac{|S|^{N+1}}{d\,(N+1)!}\displaystyle\int_{0}^{q} u^{N} f\!\left(u - \dfrac{b}{d}\right) du$.

In particular, passing to the limit $q \to \infty$, we obtain:

$E\!\left(\dfrac{1}{b + d\varepsilon}\right) = \lim_{q\to\infty}\Phi(q) = -\dfrac{1}{d}\sum_{n=1}^{\infty}\dfrac{S^n}{n!}\displaystyle\int_{0}^{\infty} u^{n-1} f\!\left(u - \dfrac{b}{d}\right) du$

and, using the Alternating Series Remainder Estimate, the approximation of $E(W)$ is:

$E(W) \approx -\dfrac{1}{d}\sum_{n=1}^{N}\dfrac{S^n}{n!}\displaystyle\int_{0}^{\infty} u^{n-1} f\!\left(u - \dfrac{b}{d}\right) du$,

and the error of this approximation is bounded as follows:

$|R(N)| \leq \dfrac{|S|^{N+1}}{d\,(N+1)!}\displaystyle\int_{0}^{\infty} u^{N} f\!\left(u - \dfrac{b}{d}\right) du$.

Each of the above formulas contains integrals of the form $\displaystyle\int_{0}^{\infty} u^{m} f\!\left(u - \dfrac{b}{d}\right) du$. Using the substitution $x = u - \dfrac{b}{d}$, the equivalent form of the integral is $\displaystyle\int_{-\frac{b}{d}}^{\infty}\left(x + \dfrac{b}{d}\right)^{m} f(x)\,dx$; applying the Binomial Theorem and considering the explicit form of $f(x)$, we finally have:

$\displaystyle\int_{0}^{\infty} u^{m} f\!\left(u - \dfrac{b}{d}\right) du = \dfrac{1}{\sigma\sqrt{2\pi}}\sum_{j=0}^{m}\binom{m}{j}\left(\dfrac{b}{d}\right)^{m-j}\displaystyle\int_{-\frac{b}{d}}^{\infty} x^{j}\exp\!\left(-\dfrac{x^2}{2\sigma^2}\right) dx$.

Therefore, the critical integral for evaluating all the above integrals is:

$I(n) = \displaystyle\int x^{n}\exp\!\left(-\dfrac{x^2}{2\sigma^2}\right) dx$

and its definite and improper versions are, respectively:

$I(n, q) = \displaystyle\int_{-\frac{b}{d}}^{q} x^{n}\exp\!\left(-\dfrac{x^2}{2\sigma^2}\right) dx$ and $I(n, \infty) = \displaystyle\int_{-\frac{b}{d}}^{\infty} x^{n}\exp\!\left(-\dfrac{x^2}{2\sigma^2}\right) dx$.

Case 1. Odd integers $n = 2k - 1$, $k \in \mathbb{N}$. In this case we have:

$I(2k-1) = \displaystyle\int x^{2k-1}\exp\!\left(-\dfrac{x^2}{2\sigma^2}\right) dx$.

By substituting $w = \dfrac{x^2}{2\sigma^2}$, and therefore $dw = \dfrac{x}{\sigma^2}\,dx$, it is obtained that:

$I(2k-1) = \big(2\sigma^2\big)^{k-1}\sigma^2\displaystyle\int w^{k-1} e^{-w}\,dw$.

This integral can be evaluated by applying integration by parts $k - 1$ times. Such an approach leads to the following answer:

$I(2k-1) = -\big(2\sigma^2\big)^{k-1}\sigma^2\, e^{-w}\sum_{j=0}^{k-1}\dfrac{(k-1)!}{(k-1-j)!}\, w^{k-1-j} + C$,

which, in terms of $x$, has the following form for positive odd integers $2k-1$, $k \in \mathbb{N}$:

$I(2k-1) = -\big(2\sigma^2\big)^{k-1}\sigma^2\,\exp\!\left(-\dfrac{x^2}{2\sigma^2}\right)\sum_{j=0}^{k-1}\dfrac{(k-1)!}{(k-1-j)!}\left(\dfrac{x^2}{2\sigma^2}\right)^{k-1-j} + C$,

and further the definite integral of interest has the form:

$I(2k-1, q) = \big(2\sigma^2\big)^{k-1}\sigma^2\left[-\exp\!\left(-\dfrac{q^2}{2\sigma^2}\right)\sum_{j=0}^{k-1}\dfrac{(k-1)!}{(k-1-j)!}\left(\dfrac{q^2}{2\sigma^2}\right)^{k-1-j} + \exp\!\left(-\dfrac{b^2}{2d^2\sigma^2}\right)\sum_{j=0}^{k-1}\dfrac{(k-1)!}{(k-1-j)!}\left(\dfrac{b^2}{2d^2\sigma^2}\right)^{k-1-j}\right]$.

Taking into consideration that for every $m \in \mathbb{N}$:

$\lim_{q\to\infty} q^{m}\exp\!\left(-\dfrac{q^2}{2\sigma^2}\right) = 0$,  (21)

the first term in the above expression vanishes as $q \to \infty$. We obtain the following improper integral:

$I(2k-1, \infty) = \big(2\sigma^2\big)^{k-1}\sigma^2\,\exp\!\left(-\dfrac{b^2}{2d^2\sigma^2}\right)\sum_{j=0}^{k-1}\dfrac{(k-1)!}{(k-1-j)!}\left(\dfrac{b^2}{2d^2\sigma^2}\right)^{k-1-j}$.

Case 2. Even integers $n = 2k$, $k \in \mathbb{N}$. In this case we have:

$I(2k) = \displaystyle\int x^{2k}\exp\!\left(-\dfrac{x^2}{2\sigma^2}\right) dx$.

Apply first integration by parts with $u = \exp\!\left(-\dfrac{x^2}{2\sigma^2}\right)$ and $dv = x^{2k}dx$. We have $du = -\dfrac{x}{\sigma^2}\exp\!\left(-\dfrac{x^2}{2\sigma^2}\right)dx$ and $v = \dfrac{x^{2k+1}}{2k+1}$, and therefore:

$\displaystyle\int x^{2k}\exp\!\left(-\dfrac{x^2}{2\sigma^2}\right)dx = \dfrac{x^{2k+1}}{2k+1}\exp\!\left(-\dfrac{x^2}{2\sigma^2}\right) + \dfrac{1}{(2k+1)\sigma^2}\displaystyle\int x^{2k+2}\exp\!\left(-\dfrac{x^2}{2\sigma^2}\right)dx + C$.

By solving for the integral on the right-hand side and substituting $2k$ for $2k+2$, we have the following recursive formula:

$I(2k) = (2k-1)\,\sigma^2\, I(2k-2) - \sigma^2\, x^{2k-1}\exp\!\left(-\dfrac{x^2}{2\sigma^2}\right) + C$.

By applying this recursive formula $k$ times, we obtain the following form of the integral $I(2k)$:

$I(2k) = \sigma^{2k}\left[\prod_{i=1}^{k}(2i-1)\right]\displaystyle\int\exp\!\left(-\dfrac{x^2}{2\sigma^2}\right)dx - \exp\!\left(-\dfrac{x^2}{2\sigma^2}\right)\sum_{j=1}^{k}\left[\prod_{i=j+1}^{k}(2i-1)\right]\sigma^{2(k-j+1)}x^{2j-1} + C$.

Now, for positive even integers $n = 2k$, $k \in \mathbb{N}$, the definite integral $I(2k, q)$ and the improper integral $I(2k, \infty)$ have, respectively, the forms:

$I(2k, q) = \sigma^{2k+1}\sqrt{2\pi}\left[\prod_{i=1}^{k}(2i-1)\right]\left[F(q) - F\!\left(-\dfrac{b}{d}\right)\right] - \sum_{j=1}^{k}\left[\prod_{i=j+1}^{k}(2i-1)\right]\sigma^{2(k-j+1)}\left[x^{2j-1}\exp\!\left(-\dfrac{x^2}{2\sigma^2}\right)\right]_{-\frac{b}{d}}^{q}$

and:

$I(2k, \infty) = \sigma^{2k+1}\sqrt{2\pi}\left[\prod_{i=1}^{k}(2i-1)\right]\left[1 - F\!\left(-\dfrac{b}{d}\right)\right] - \lim_{q\to\infty}\sum_{j=1}^{k}\left[\prod_{i=j+1}^{k}(2i-1)\right]\sigma^{2(k-j+1)}\left[x^{2j-1}\exp\!\left(-\dfrac{x^2}{2\sigma^2}\right)\right]_{-\frac{b}{d}}^{q}$,

which, after applying (21), gives the following expression:

$I(2k, \infty) = \sigma^{2k+1}\sqrt{2\pi}\left[\prod_{i=1}^{k}(2i-1)\right]\left[1 - F\!\left(-\dfrac{b}{d}\right)\right] + \exp\!\left(-\dfrac{b^2}{2d^2\sigma^2}\right)\sum_{j=1}^{k}\left[\prod_{i=j+1}^{k}(2i-1)\right]\sigma^{2(k-j+1)}\left(-\dfrac{b}{d}\right)^{2j-1}$.

In conclusion, for all $k \in \mathbb{N}$ we have:

$I(2k-1, \infty) = \dfrac{(k-1)!\,(2\sigma^2)^{k}}{2}\,\exp\!\left(-\dfrac{b^2}{2d^2\sigma^2}\right)\sum_{j=0}^{k-1}\dfrac{1}{j!}\left(\dfrac{b^2}{2d^2\sigma^2}\right)^{j}$

and:

$I(2k, \infty) = \sigma^{2k+1}\sqrt{2\pi}\left[\prod_{i=1}^{k}(2i-1)\right]\left[1 - F\!\left(-\dfrac{b}{d}\right)\right] + \exp\!\left(-\dfrac{b^2}{2d^2\sigma^2}\right)\sum_{j=1}^{k}\left[\prod_{i=j+1}^{k}(2i-1)\right]\sigma^{2(k-j+1)}\left(-\dfrac{b}{d}\right)^{2j-1}$.
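The integrals $I(n, \infty)$ can also be generated by a single integration-by-parts recursion equivalent to the reductions in Cases 1 and 2. The sketch below compares that recursion with direct quadrature; the lower limit of integration and $\sigma$ are chosen arbitrarily for illustration.

```python
import numpy as np
from scipy import integrate
from scipy.stats import norm

sigma, L = 1.0, -2.0                  # illustrative standard deviation and lower limit

def I_rec(n, L, sigma):
    """Incomplete Gaussian moment  I(n) = int_L^inf x^n exp(-x^2/(2 sigma^2)) dx,
    computed by the integration-by-parts recursion."""
    if n == 0:
        return sigma * np.sqrt(2.0 * np.pi) * norm.sf(L / sigma)
    if n == 1:
        return sigma**2 * np.exp(-L**2 / (2.0 * sigma**2))
    return sigma**2 * L**(n - 1) * np.exp(-L**2 / (2.0 * sigma**2)) \
        + (n - 1) * sigma**2 * I_rec(n - 2, L, sigma)

for n in range(6):
    num, _ = integrate.quad(lambda x: x**n * np.exp(-x**2 / (2 * sigma**2)), L, np.inf)
    print(n, I_rec(n, L, sigma), num)     # recursion and quadrature agree
```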

Finally:

$E\!\left(\dfrac{1}{b + d\varepsilon}\right) = -\dfrac{1}{d\,\sigma\sqrt{2\pi}}\sum_{n=1}^{\infty}\dfrac{S^n}{n!}\sum_{j=0}^{n-1}\binom{n-1}{j}\left(\dfrac{b}{d}\right)^{n-1-j} I(j, \infty)$

and for its approximation one can take:

$E(W) \approx -\dfrac{1}{d\,\sigma\sqrt{2\pi}}\sum_{n=1}^{N}\dfrac{S^n}{n!}\sum_{j=0}^{n-1}\binom{n-1}{j}\left(\dfrac{b}{d}\right)^{n-1-j} I(j, \infty)$.

If, as assumed, $b > 0$ and $d > 0$, the error of the above approximation can be estimated by the Alternating Series Remainder Estimate:

$|R(N)| \leq \dfrac{|S|^{N+1}}{d\,(N+1)!}\displaystyle\int_{0}^{\infty} u^{N} f\!\left(u - \dfrac{b}{d}\right) du = \dfrac{|S|^{N+1}}{d\,\sigma\sqrt{2\pi}\,(N+1)!}\sum_{j=0}^{N}\binom{N}{j}\left(\dfrac{b}{d}\right)^{N-j} I(j, \infty)$,

which, after considering the definition of $I(j, \infty)$, is as in the thesis of this theorem.
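Assembling the pieces, the truncated series (19) can be evaluated and compared with a direct quadrature of the symmetrized integral from (11). The sketch below uses illustrative constants $b$, $d$, $\sigma$ and the incomplete-moment recursion shown earlier.

```python
import numpy as np
from math import comb, factorial
from scipy import integrate
from scipy.stats import norm

b, d, sigma = 1.0, 1.0, 1.0           # illustrative constants, b > 0, d > 0
a = b / d
S = -2.0 * b / (d * sigma**2)

def I(j, L):
    """I(j) = int_L^inf x^j exp(-x^2/(2 sigma^2)) dx  (integration-by-parts recursion)."""
    if j == 0:
        return sigma * np.sqrt(2 * np.pi) * norm.sf(L / sigma)
    if j == 1:
        return sigma**2 * np.exp(-L**2 / (2 * sigma**2))
    return sigma**2 * L**(j - 1) * np.exp(-L**2 / (2 * sigma**2)) + (j - 1) * sigma**2 * I(j - 2, L)

def series_value(N):
    """Truncated series (19) for E[1/(b + d*eps)], eps ~ N(0, sigma^2)."""
    total = 0.0
    for n in range(1, N + 1):
        inner = sum(comb(n - 1, j) * a**(n - 1 - j) * I(j, -a) for j in range(n))
        total += S**n / factorial(n) * inner
    return -total / (d * sigma * np.sqrt(2 * np.pi))

# direct numerical value of the same symmetrized integral, for comparison
f = lambda x: np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
direct, _ = integrate.quad(lambda u: f(u - a) * (1 - np.exp(S * u)) / (d * u), 0, np.inf)

print(direct, series_value(25))       # the truncated series approaches the integral
```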

Corollary 1. For the estimator based on the four observations $x_1, x_2, x_3, x_4$ (that is, $t = 4$), with the constants $a$, $b$, $c$, $d$ of (3) determined by $x_1$, $x_2$, $\alpha_1$ and $\alpha_2$:

$E\big(\widehat{\alpha}_{1,4} - \alpha_1\big) = \dfrac{a}{d}\Big[1 - b\,E(W)\Big]$,  (22)

$E\big(\widehat{\alpha}_{2,4} - \alpha_2\big) = \dfrac{a^2}{d^2}\Big[1 - b\,E(W)\Big]$,  (23)

where:

$E(W) = E\!\left(\dfrac{1}{b + d\varepsilon_3}\right) = -\dfrac{1}{d\,\sigma\sqrt{2\pi}}\sum_{n=1}^{\infty}\dfrac{S^n}{n!}\sum_{j=0}^{n-1}\binom{n-1}{j}\left(\dfrac{b}{d}\right)^{n-1-j} I(j, \infty)$  (24)

and:

$I(2k-1, \infty) = \dfrac{(k-1)!\,(2\sigma^2)^{k}}{2}\,\exp\!\left(-\dfrac{b^2}{2d^2\sigma^2}\right)\sum_{j=0}^{k-1}\dfrac{1}{j!}\left(\dfrac{b^2}{2d^2\sigma^2}\right)^{j}$,  (25)

$I(2k, \infty) = \sigma^{2k+1}\sqrt{2\pi}\left[\prod_{i=1}^{k}(2i-1)\right]\left[1 - F\!\left(-\dfrac{b}{d}\right)\right] + \exp\!\left(-\dfrac{b^2}{2d^2\sigma^2}\right)\sum_{j=1}^{k}\left[\prod_{i=j+1}^{k}(2i-1)\right]\sigma^{2(k-j+1)}\left(-\dfrac{b}{d}\right)^{2j-1}$,  (26)

where, following formula (8), $S = -\dfrac{2b}{d\sigma^2}$. In addition:

$E\!\left(\dfrac{1}{b + d\varepsilon_3}\right) \approx -\dfrac{1}{d\,\sigma\sqrt{2\pi}}\sum_{n=1}^{N}\dfrac{S^n}{n!}\sum_{j=0}^{n-1}\binom{n-1}{j}\left(\dfrac{b}{d}\right)^{n-1-j} I(j, \infty)$  (27)

and the error of approximation can be bounded as follows:

$|R(N)| \leq \dfrac{|S|^{N+1}}{d\,\sigma\sqrt{2\pi}\,(N+1)!}\sum_{j=0}^{N}\binom{N}{j}\left(\dfrac{b}{d}\right)^{N-j} I(j, \infty)$.  (28)

Proof. The formulas in this corollary are obtained directly by substituting the expressions from (3) for the constants into formulas (16)-(20).
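To illustrate how the corollary would be used in practice, the sketch below assembles the conditional bias of $\widehat{\alpha}_1$ in the four-observation case from formula (22), with the constants expressed through $x_1$, $x_2$, $\alpha_1$, $\alpha_2$ as in Section 1 and $E(W)$ computed in the symmetric-limit sense of (10). All numerical values are illustrative, and the variable names are chosen for this sketch only.

```python
import numpy as np
from scipy import integrate

# Illustrative values only: true coefficients, noise s.d. and the two known observations.
alpha1, alpha2, sigma = 0.5, 0.3, 1.0
x1, x2 = 1.0, 2.0

# Constants of (3) for t = 4:  det X_4 = b_const + d_const * eps_3.
a_const = x2
d_const = -x1
b_const = x2**2 - alpha1 * x1 * x2 - alpha2 * x1**2

def pv_mean_reciprocal(b, d, sigma):
    """E[1/(b + d*eps)] for eps ~ N(0, sigma^2), via the symmetrized integral of (10)-(11).
    The integrand f(u - b/d) - f(u + b/d) equals f(u - b/d)*(1 - exp(S*u)) from (8),
    written in a form that avoids overflow when S > 0."""
    shift = b / d
    f = lambda x: np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    val, _ = integrate.quad(lambda u: (f(u - shift) - f(u + shift)) / (d * u), 0, np.inf)
    return val

EW = pv_mean_reciprocal(b_const, d_const, sigma)
bias_alpha1 = (a_const / d_const) * (1.0 - b_const * EW)   # formula (22)
print("conditional bias of alpha1_hat given x1, x2:", bias_alpha1)
```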

Conclusion and direction of further research

Due to the form of $E(W)$ in Theorem 3, to evaluate this expected value it is necessary to evaluate an infinite series, which for computational purposes has to be limited to a finite number of terms. Formulas (16) and (17) allow for such an approximation. The bias of $\widehat{\alpha}_{1}$ and $\widehat{\alpha}_{2}$ depends on the constant $b$, which is built from the initial observations and from the coefficients $\alpha_1$ and $\alpha_2$; therefore it depends on the unknown values of the parameters of interest. Such a dependence was also obtained by Shaman and Stine [1988] in determining the bias of the Yule-Walker and LS estimates of the coefficients of autoregressive models with incorporated mean, using the autocorrelation and spectral density. The bias of the estimator determined in this paper also depends on the unknown variance $\sigma^2$ of the random terms of model (1).

Further exploration of the above formulas should involve determining how the substitution of estimates of the parameters $\alpha_1$ and $\alpha_2$ can help in estimating the bias, as well as what impact the second terms on the right-hand sides of formulas (22)-(23) have on the bias and its estimation. Also, the bounds for $E(W)$ in (14) depend on an arbitrary value $q > 0$; it may be beneficial to investigate the value of $q$ for which these bounds give the most precise estimate of $E(W)$. The research presented in this paper covers the special case of maximum lag two. Further generalizations should include a higher lag order and the inclusion of the constant term in model (1). It is also desirable to conduct simulations on artificially generated time series as well as on observed time series of economic indicators. A comparative analysis of the presented bias formula with methods existing in the literature should also be considered in future exploration of the topic.

References

Bronsztejn I.N., Siemiendiajew K.A. (1996), Matematyka. Poradnik encyklopedyczny, Wydawnictwo Naukowe PWN, Warszawa.

Hurwicz L. (1950), Least Square Bias in Time Series [in:] Koopmans T.C. (ed.), Statistical Inference in Dynamic Economic Models, John Wiley & Sons, New York, pp. 365-383.

Kunitomo N., Yamamoto T. (1985), Properties of Predictors in Misspecified Autoregressive Time Series Models, "Journal of the American Statistical Association", No. 80, pp. 941-950.

Kwiatkowska E. (2002), Analiza zmian strukturalnych modeli autoregresyjnych dla wybranych zmiennych ekonomicznych, Wydawnictwo Akademii Ekonomicznej w Katowicach, Katowice.

Kwiatkowska E. (2004), Struktura i zmienność ekonomicznych szeregów czasowych, Doctoral Dissertation, University of Economics, Katowice.

Lai T.L., Wei C.Z. (1983), Asymptotic Properties of General Autoregressive Models and Strong Consistency of Least-squares Estimates of their Parameters, “Journal of Multivariate Analysis”, Vol. 13, Iss. 1, pp. 1-23.

Le Breton A., Pham D.T. (1989), On the Bias of the Least Squares Estimator for the First Order Autoregressive Processes, "Annals of the Institute of Statistical Mathematics", Vol. 41, No. 3, pp. 555-563.

Marriott F.H.C., Pope J.A. (1954), Bias in the Estimation of Autocorrelations, "Biometrika", No. 41, pp. 390-402.

Ross S. (2010), A First Course in Probability, Prentice Hall, Upper Saddle River – New Jersey.

Shaman P., Stine R. (1988), The Bias of Autoregressive Coefficient Estimators, “Journal of the American Statistical Association”, Vol. 83, No. 403, pp. 842-848.

Shenton L.R., Johnson W.L. (1965), Moments of Serial Correlation Coefficient, "Journal of the Royal Statistical Society", No. 27, Series B, pp. 308-320.

Shumway R.H., Stoffer D.S. (2006), Time Series Analysis and its Applications with R Examples, Springer, New York et al.

Tanaka K. (1984), An Asymptotic Expansion Associated with the Maximum Likelihood Estimators in ARMA Models, “Journal of the Royal Statistical Society. Series B Methodological”, Vol. 46, No. 1, pp. 58-67.

Tjøstheim D., Paulsen J. (1983), Bias of Some Commonly-used Time Series Estimates, "Biometrika", No. 70(2), pp. 389-399.

White J.S. (1961), Asymptotic Expansions for the Mean and Variance of the Serial Correlation Coefficient, "Biometrika", Vol. 48, No. 1/2(Jun), pp. 85-94.

Yamamoto T., Kunitomo N. (1984), Asymptotic Bias of the Least Squares Estimator for Multivariate Autoregressive Models, "Annals of the Institute of Statistical Mathematics", Vol. 36, pp. 419-430.

ON THE BIAS OF THE LS ESTIMATOR OF THE PARAMETERS OF THE AUTOREGRESSIVE MODEL AR(2)

Abstract: The article presents the formula for the bias of the conditional estimator of the parameters of a second-order autoregressive model based on four observations. The crucial role of the random variable $W = \frac{1}{a + c\varepsilon}$ is indicated, where $a$ and $c$ are constant values and $\varepsilon$ is the random term of the autoregressive equation. In addition, the bias of the estimator of the autoregressive parameters is expressed in an approximate form, and a formula for the error of this approximation is given.

Keywords: autoregressive process, least-squares estimator, bias of the LS estimator.
