BAYESIAN AND GENERALIZED CONFIDENCE INTERVALS ON VARIANCE RATIO AND ON THE VARIANCE COMPONENT IN MIXED LINEAR MODELS
Andrzej Michalski
Department of Mathematics
Wrocław University of Environmental and Life Sciences
Grunwaldzka 53, 50–357 Wrocław, Poland
e-mail: andrzej.michalski@up.wroc.pl, apm.mich@gmail.com
Abstract
The paper deals with the construction of exact confidence intervals for the variance component $\sigma_1^2$ and for the ratio $\theta$ of the variance components $\sigma_1^2$ and $\sigma^2$ in mixed linear models for the family of normal distributions $N_t(0, \sigma_1^2 W + \sigma^2 I_t)$. This problem depends essentially on the algebraic structure of the covariance matrix $W$ (see Gnot and Michalski, 1994, Michalski and Zmyślony, 1996). In the paper we give two classes of Bayesian interval estimators depending on a prior distribution on $(\sigma_1^2, \sigma^2)$ for:
1) the variance components ratio $\theta$, built by using test statistics obtained from the decomposition of a quadratic form $y'Ay$ for the Bayes locally best estimator of $\sigma_1^2$ (Michalski and Zmyślony, 1996),
2) the variance component $\sigma_1^2$, constructed using Bayes point estimators from the BIQUE class (Best Invariant Quadratic Unbiased Estimators, see Gnot and Kleffe, 1983, and Michalski, 2003).
An idea of constructing confidence intervals using generalized p-values is also presented (Tsui and Weerahandi, 1989, Zhou and Mathew, 1994). Theoretical results for the Bayes interval estimators and for some generalized confidence intervals are illustrated and compared by simulation studies for some experimental layouts (cf. Arendacká, 2005).
Keywords: mixed linear models; variance components; hypothesis testing; confidence intervals; generalized p-values.
2000 Mathematics Subject Classification: 62F03, 62H10, 62J10.
1. Introduction

We consider a mixed linear model

(1) $y = X\beta + X_1\beta_1 + e$,

where $y$ is an $(n \times 1)$ observed vector, $X$ is a known $(n \times q)$ matrix of rank $s$, $s \le q$, $X_1$ is a known $(n \times q_1)$ matrix of rank $s_1$, $s_1 \le q_1$, $\beta$ is a $q$-vector of parameters corresponding to fixed effects, while the unobservable random vector $\beta_1$ and the vector of random errors $e$ are stochastically independent and normally distributed with zero means and covariance matrices $\sigma_1^2 I_{q_1}$ and $\sigma^2 I_n$, respectively. Under these assumptions we obtain

(2) $E(y) = X\beta$, $\mathrm{Var}(y) = \sigma_1^2 V_1 + \sigma^2 I_n$, $V_1 = X_1X_1'$.

In this paper we restrict our attention to quadratic estimators $y'Ay$ which are invariant under the group of translations $g(y) = y + X\beta$, i.e., for which $AX = 0$. It can be checked that if $B$ is an $((n-s) \times n)$ matrix such that $BB' = I_{n-s}$, $B'B = I - XX^+$, then $t = By$ is a maximal invariant statistic under this group of translations. The model for $t$ is as follows:

(3) $E(t) = 0$, $\mathrm{Cov}(t) = E(tt') = \sigma_1^2 W + \sigma^2 I_{n-s}$, $W = BV_1B'$.
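As a small numerical illustration of the reduction $t = By$ (a sketch of our own, not taken from the paper; the design matrices below are invented for the example), the rows of $B$ can be taken as an orthonormal basis of the orthogonal complement of the column space of $X$:

```python
import numpy as np

# Invented toy design: n = 6 observations, one fixed-effect column (intercept),
# X1 groups the observations into 2 random-effect levels.
X = np.ones((6, 1))
X1 = np.kron(np.eye(2), np.ones((3, 1)))      # (6 x 2) incidence matrix
V1 = X1 @ X1.T

# B has orthonormal rows spanning the orthogonal complement of col(X):
# BB' = I_{n-s} and B'B = I - X X^+.
Q, _ = np.linalg.qr(X, mode="complete")
s = np.linalg.matrix_rank(X)
B = Q[:, s:].T                                # ((n-s) x n)

W = B @ V1 @ B.T                              # covariance structure of t = By

# Sanity checks on the construction.
assert np.allclose(B @ X, 0)                  # invariance: BX = 0
assert np.allclose(B @ B.T, np.eye(6 - s))    # BB' = I_{n-s}
```

Any orthonormal basis of the complement works, since all such choices give maximal invariants that are one-to-one functions of each other.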
Denote by $\alpha_1 > \alpha_2 > \ldots > \alpha_{h-1} > \alpha_h = 0$ the ordered sequence of different eigenvalues of $W$, and let $W = \sum_{j=1}^{h} \alpha_j E_j$ be the spectral decomposition of $W$, where $E_j$ denotes the orthogonal projector onto the eigenspace of $\alpha_j$.
Following Olsen et al. (1976) let us consider the random vector $Z = (Z_1, \ldots, Z_{h-1}, Z_h)$, where $Z_i = t'E_it/\nu_i$ for $i = 1, \ldots, h$, and $\nu_1, \ldots, \nu_{h-1}, \nu_h$ are the multiplicities of the $\alpha$'s. Under normality of $y$ the random variables $Z_i$ are stochastically independent, and $\nu_iZ_i/(\alpha_i\sigma_1^2 + \sigma^2)$ are central chi-squared distributed with $\nu_i$ degrees of freedom, $i = 1, \ldots, h$. The model for $Z = (Z_1, \ldots, Z_{h-1}, Z_h)$ is as follows:
(4)
$$E(Z) = L(\sigma_1^2, \sigma^2)', \qquad L' = \begin{pmatrix} \alpha_1 & \ldots & \alpha_h \\ 1 & \ldots & 1 \end{pmatrix}, \qquad \mathrm{Var}(Z) = 2\,\mathrm{diag}\{(\alpha_i\sigma_1^2 + \sigma^2)^2/\nu_i\}.$$
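The statistics $Z_i$ can be computed directly from the eigendecomposition of $W$. The following sketch (the eigenvalues, multiplicities, and variance values are invented for illustration) also checks the identity $\sum_i \nu_iZ_i = t't$, which holds because the projectors $E_i$ sum to the identity:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented toy structure: W with eigenvalues alpha = (2, 0) and
# multiplicities nu = (2, 3) on a 5-dimensional t-space.
alphas = np.array([2.0, 0.0])
nus = np.array([2, 3])
W = np.diag(np.repeat(alphas, nus))

sigma1_sq, sigma_sq = 0.5, 1.0
cov_t = sigma1_sq * W + sigma_sq * np.eye(5)

# Draw t ~ N(0, cov_t) and form Z_i = t' E_i t / nu_i for each distinct eigenvalue.
t = rng.multivariate_normal(np.zeros(5), cov_t)
eigvals, eigvecs = np.linalg.eigh(W)
Z = []
for a, nu in zip(alphas, nus):
    cols = np.isclose(eigvals, a)              # eigenvectors for this eigenvalue
    E = eigvecs[:, cols] @ eigvecs[:, cols].T  # orthogonal projector E_i
    Z.append(t @ E @ t / nu)

# nu_i Z_i / (alpha_i sigma_1^2 + sigma^2) is chi-squared with nu_i d.f.
scaled = [nu * z / (a * sigma1_sq + sigma_sq)
          for z, a, nu in zip(Z, alphas, nus)]
```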
In Section 2 we use the results of point estimation for a function $f'\sigma = f_1\sigma_1^2 + f_2\sigma^2$ to construct possibly shortest exact Bayes confidence intervals on the variance component $\sigma_1^2$. It follows from the results of Seely (1970) that in the model (3) each function $f'\sigma$ is invariantly estimable iff the matrices $W$ and $I$ are linearly independent, or equivalently iff the number $h$ of different eigenvalues of $W$ satisfies $h \ge 2$.
In Subsection 2.2 and Section 3 we show that the problem of interval estimation for the variance component $\sigma_1^2$ or for the variance ratio $\theta = \sigma_1^2/\sigma^2$ is also connected with testing the hypothesis about $\sigma_1^2$

(5) $H_\sigma: \sigma_1^2 \le \sigma_0^2$ vs $K_\sigma: \sigma_1^2 > \sigma_0^2$

or the hypothesis about $\theta$

(6) $H_\theta: \theta \le \theta_0$ vs $K_\theta: \theta > \theta_0$.
It is a known fact that, in general, tests with good statistical properties (most powerful tests and locally best tests) lead to good confidence intervals at a fixed confidence level. In Subsection 2.2 we give a class of Bayes confidence intervals on the variance ratio $\theta$ constructed by using the test statistics $F_{A^{+-}}(\sigma^*)$ obtained from the decomposition of a quadratic form $t'At$ for the Bayes locally best estimator of $\sigma_1^2$ at the point $\sigma^* = (\sigma_1^*, \sigma^*)'$ (see Michalski and Zmyślony, 1996). Unfortunately, we cannot directly construct a confidence interval for the variance component $\sigma_1^2$ in this way due to the presence of the nuisance parameter $\sigma^2$. Therefore, a useful approach is to construct a confidence interval on $\sigma_1^2$ using generalized p-values and generalized test variables. In Section 3 we present the idea of generalized p-values and generalized test variables, which was introduced by Tsui and Weerahandi (1989) and further developed by Weerahandi (1991, 1993).
2. Bayesian confidence intervals

In this section we consider the Bayes approach to constructing possibly shortest confidence intervals on the variance component or on the variance ratio, using results for estimable functions $f'\sigma$ and test statistics derived from locally best invariant quadratic unbiased estimators for the variance component $\sigma_1^2$.

Definition 2.1. An estimator $y'Ay$ is a Bayesian invariant quadratic and unbiased (BIQU) estimator of $f'\sigma$ with respect to $U = (u_{ij})_{i,j=1,2}$ (or with respect to a prior distribution $\tau$ such that $E_\tau\sigma\sigma' = U$), if $A$ minimizes the Bayesian risk $\mathrm{Var}_\tau(y'Ay)$ in the class of symmetric and positive definite matrices satisfying the conditions $AX = 0$, $E(y'Ay) = f'\sigma$.
Let $\mathcal{U}$ be the class of symmetric positive definite matrices $U$ with nonnegative elements. It is known (see e.g. Gnot and Kleffe, 1983, or Gnot, 1991) that, up to multiplication by a constant, the class $\mathcal{U}$ can be characterized using two nonnegative parameters $u, v$ as follows:

$$\mathcal{U} = \left\{ U = U_{u,v} = \begin{bmatrix} u^2+v & u \\ u & 1 \end{bmatrix},\ u, v \ge 0 \right\} \cup \left\{ U_0 = U_{u,\infty} = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \right\}.$$
2.1. Bayesian interval estimators on $\sigma_1^2$

We consider a class of admissible invariant quadratic unbiased estimators for a given function $f'\sigma = f_1\sigma_1^2 + f_2\sigma^2$, which are Bayesian with respect to a prior distribution $\tau$ with $E_\tau\sigma\sigma' = U$. It follows from Gnot and Kleffe (1983) that for a given function $f_1\sigma_1^2 + f_2\sigma^2$ the class $\mathcal{A}_{IU}$ of admissible invariant quadratic unbiased estimators in the model given by (3) coincides with the linear combinations of the minimal sufficient statistics $Z_i$ as follows:
$$\mathcal{A}_{IU} = \left\{ \hat\gamma(u,v) = \sum_{i=1}^{h} (\lambda_1\alpha_i + \lambda_2)\,\nu_i\,w_i(u,v)\,Z_i,\ u, v \ge 0 \right\} \cup \mathcal{A}_0,$$

where $w_i(u,v) = (1 + 2u\alpha_i + (u^2+v)\alpha_i^2)^{-1}$, and the class $\mathcal{A}_0$ consists of the limiting estimators $\hat\gamma(\infty)$ obtained as $v$ tends to infinity, which are Bayesian with respect to $\tau_0$ with $U_0$, i.e.,

$$\mathcal{A}_0 = \left\{ \hat\gamma(\infty) = \sum_{i=1}^{h-1} \lambda_1\nu_i\alpha_iZ_i + \lambda_2\nu_hZ_h \right\}.$$
Here $\lambda_1$ and $\lambda_2$ are chosen such that $\hat\gamma(u,v)$ or $\hat\gamma(\infty)$ is unbiased for $f_1\sigma_1^2 + f_2\sigma^2$, i.e.,

$$\sum_{i=1}^{h} (\lambda_1\alpha_i + \lambda_2)\,w_i(u,v)\,\alpha_i\nu_i = f_1, \qquad \sum_{i=1}^{h} (\lambda_1\alpha_i + \lambda_2)\,w_i(u,v)\,\nu_i = f_2$$

for $\hat\gamma(u,v)$, while for $\hat\gamma(\infty)$ we have

$$\lambda_1\,\mathrm{rank}(W) = f_1 \quad\text{and}\quad \lambda_1\sum_{i=1}^{h-1}\nu_i\alpha_i + \lambda_2\nu_h = f_2.$$
Now we give the construction of an exact $1-p$ confidence interval for $\sigma_1^2$ based on the BIQUE of $\sigma_1^2$, according to the following algorithm A1-A5:

A1. Choose the BIQUE $\hat\sigma_1^2 = \sum_{i=1}^{h} a_iZ_i$ of $\sigma_1^2$ with respect to the prior distribution $\tau$ on $(\sigma_1^2, \sigma^2)$ for given parameters $u, v \ge 0$:

$$E(\hat\sigma_1^2) = \sigma_1^2; \qquad \sum_{i=1}^{h} a_i\alpha_i = 1; \qquad \sum_{i=1}^{h} a_i = 0.$$
A2. Calculate the variance of $\hat\sigma_1^2$:

$$\mathrm{Var}(\hat\sigma_1^2) = 2\sigma_1^4 \sum_{i=1}^{h} \frac{a_i^2}{\nu_i}\,(1/\theta + \alpha_i)^2.$$
A3. Determine the exact probability distribution of $\hat\sigma_1^2$:

$$\hat\sigma_1^2 \sim \sigma_1^2 \sum_{i=1}^{h} \frac{a_i}{\nu_i}\,(1/\theta + \alpha_i)\,\chi^2_{\nu_i},$$

so that for fixed $\sigma_1^2$ and for each $\theta \in (0, \infty)$ we have

$$\frac{\hat\sigma_1^2}{\sigma_1^2} \sim \sum_{i=1}^{h} \frac{a_i}{\nu_i}\,(1/\theta + \alpha_i)\,\chi^2_{\nu_i} = \sum_{i=1}^{h} (\sqrt{2}/2)\,\mathrm{SE}_\theta(a_iZ_i)\,\chi^2_{\nu_i},$$

where $\mathrm{SE}(\cdot)$ denotes the standard error of an estimator.
A4. Find the quantiles $C_{p_1}$ and $C_{p_2}$ from the distribution of the quadratic form $Q(\theta) = \sum_{i=1}^{h} b_i(\theta)\chi^2_{\nu_i}$, where $b_i(\theta) = (a_i/\nu_i)(1/\theta + \alpha_i)$, such that

$$Pr\left\{ C_{p_1}(\theta) \le \frac{\hat\sigma_1^2}{\sigma_1^2} \le C_{p_2}(\theta) \right\} = 1 - p_1 - p_2 = 1 - p$$

or

$$Pr\left\{ \hat\sigma_1^2/C_{p_2}(\theta) \le \sigma_1^2 \le \hat\sigma_1^2/C_{p_1}(\theta) \right\} = 1 - p_1 - p_2 = 1 - p.$$
A5. Make the optimal choice of $(C_{p_1}(\theta), C_{p_2}(\theta))$:

(A5.1) choose the optimal pair $(C_{p_1}^*(\theta), C_{p_2}^*(\theta))$ for fixed $\theta$:

$$(C_{p_1}^*(\theta), C_{p_2}^*(\theta)) = \mathop{\mathrm{Arg\,min}}_{\substack{C_{p_1}, C_{p_2}\\ p_1+p_2=p}} \left( 1/C_{p_1}(\theta) - 1/C_{p_2}(\theta) \right),$$

(A5.2) find $\theta^*$ for $\theta \in (0, \infty)$:

$$\theta^* = \mathop{\mathrm{Arg\,max}}_{\theta\in(0,\infty)} \left( 1/C_{p_1}^*(\theta) - 1/C_{p_2}^*(\theta) \right).$$
The maxmin $(1-p)$ confidence interval on the variance component $\sigma_1^2$ constructed by the above algorithm as $[\hat\sigma_1^2/C_{p_2}(\theta^*),\ \hat\sigma_1^2/C_{p_1}(\theta^*)]$ guarantees good surroundings of the point estimator of $\sigma_1^2$ and gives a sort of protection against the worst possible scenario.
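The distributional step of the algorithm can be sketched numerically. Below, quantiles of the quadratic form $Q(\theta) = \sum_i b_i(\theta)\chi^2_{\nu_i}$ from step A4 are approximated by Monte Carlo for one fixed $\theta$ (the inputs $a_i$, $\alpha_i$, $\nu_i$ are invented; exact quantiles would use Imhof's or Davies' algorithm, as the paper indicates later):

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented coefficients of a BIQUE sigma1^2-hat = sum a_i Z_i (step A1):
# they satisfy sum a_i alpha_i = 1 and sum a_i = 0.
alphas = np.array([3.0, 1.0, 0.0])
nus = np.array([2, 3, 4])
a = np.array([0.4, -0.2, -0.2])
assert np.isclose(a @ alphas, 1.0) and np.isclose(a.sum(), 0.0)

def q_theta_sample(theta, size=200_000):
    """Monte Carlo sample of Q(theta) = sum_i (a_i/nu_i)(1/theta + alpha_i) chi2_{nu_i}."""
    b = (a / nus) * (1.0 / theta + alphas)
    chi2 = rng.chisquare(df=nus, size=(size, len(nus)))
    return chi2 @ b

# Step A4 for fixed theta with an equal tail split p1 = p2 = 0.025:
theta = 1.0
sample = q_theta_sample(theta)
C_p1, C_p2 = np.quantile(sample, [0.025, 0.975])

# The resulting interval for sigma1^2 is [sigma1hat/C_p2, sigma1hat/C_p1];
# steps A5.1-A5.2 would then optimise the split (p1, p2) and take the
# worst case over theta.
```

Since $E[\hat\sigma_1^2/\sigma_1^2] = 1$, the Monte Carlo sample mean should be close to one, which gives a quick check on the setup.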
2.2. Bayesian interval estimators on $\theta = \sigma_1^2/\sigma^2$

Let us consider the locally best unbiased quadratic invariant (LBUQI) estimator of $\sigma_1^2$ at the point $\sigma^* = (\sigma_1^*, \sigma^*)'$. To present a convenient form of this estimator, let us define a random variable $K$ as follows:

$$Pr\{K = \alpha_j\} = \frac{\nu_j}{c\,(\sigma_1^*\alpha_j + \sigma^*)^2}, \quad j = 1, 2, \ldots, h, \qquad c = \sum_{j=1}^{h} \frac{\nu_j}{(\sigma_1^*\alpha_j + \sigma^*)^2}.$$
From Lemma 4.1 (Michalski and Zmyślony, 1996) we have that the LBUQI estimator of $\sigma_1^2$ at the point $\sigma^* = (\sigma_1^*, \sigma^*)'$ is $t'A^*t$, where

(7)
$$A^* = [c\,\mathrm{Var}(K)]^{-1} \sum_{j=1}^{h} \frac{\alpha_j - E(K)}{(\sigma_1^*\alpha_j + \sigma^*)^2}\,E_j,$$
while $E(K)$ and $\mathrm{Var}(K)$ denote the expectation and variance of $K$, respectively. Hence we have the following decomposition $A^* = A^{*+} - A^{*-}$, where

$$A^{*+} = [c\,\mathrm{Var}(K)]^{-1} \sum_{\alpha_j > E(K)} \frac{\alpha_j - E(K)}{(\sigma_1^*\alpha_j + \sigma^*)^2}\,E_j, \qquad A^{*-} = [c\,\mathrm{Var}(K)]^{-1} \sum_{\alpha_j < E(K)} \frac{E(K) - \alpha_j}{(\sigma_1^*\alpha_j + \sigma^*)^2}\,E_j.$$
Now, let us consider the test statistic in the following form:

(8)
$$F_{A^{+-}} = \frac{t'A^{*+}t}{t'A^{*-}t} = \frac{\displaystyle\sum_{\alpha_j > E(K)} \frac{\alpha_j - E(K)}{(\sigma_1^*\alpha_j + \sigma^*)^2}\,t'E_jt}{\displaystyle\sum_{\alpha_j < E(K)} \frac{E(K) - \alpha_j}{(\sigma_1^*\alpha_j + \sigma^*)^2}\,t'E_jt}.$$
The probability distribution of this ratio statistic for each fixed $\sigma = (\sigma_1^2, \sigma^2)'$ is as follows:

(9)
$$F_{A^{+-}} = \frac{t'A^{*+}t}{t'A^{*-}t} \sim \frac{\displaystyle\sum_{\alpha_j > E(K)} \frac{\alpha_j - E(K)}{(\sigma_1^*\alpha_j + \sigma^*)^2}\,(\sigma_1^2\alpha_j + \sigma^2)\,\chi^2_{\nu_j}}{\displaystyle\sum_{\alpha_j < E(K)} \frac{E(K) - \alpha_j}{(\sigma_1^*\alpha_j + \sigma^*)^2}\,(\sigma_1^2\alpha_j + \sigma^2)\,\chi^2_{\nu_j}},$$
or, for each $\theta$,

(10)
$$F_{A^{+-}} = \frac{t'A^{*+}t}{t'A^{*-}t} \sim \frac{\displaystyle\sum_{\alpha_j > E(K)} \frac{\alpha_j - E(K)}{(\sigma_1^*\alpha_j + \sigma^*)^2}\,(\theta\alpha_j + 1)\,\chi^2_{\nu_j}}{\displaystyle\sum_{\alpha_j < E(K)} \frac{E(K) - \alpha_j}{(\sigma_1^*\alpha_j + \sigma^*)^2}\,(\theta\alpha_j + 1)\,\chi^2_{\nu_j}}.$$
The test statistic given by (8) can be used for testing the hypothesis given by (6) and for the construction of a confidence interval on the variance ratio $\theta$. To build an exact confidence interval on $\theta$ we consider the test statistic given by (8) as a function of the parameter $\theta$, namely

(11)
$$F_{A^{+-}}(\theta) = \frac{\displaystyle\sum_{\alpha_j > E(K)} \frac{\alpha_j - E(K)}{(\sigma_1^*\alpha_j + \sigma^*)^2\,(\theta\alpha_j + 1)}\,t'E_jt}{\displaystyle\sum_{\alpha_j < E(K)} \frac{E(K) - \alpha_j}{(\sigma_1^*\alpha_j + \sigma^*)^2\,(\theta\alpha_j + 1)}\,t'E_jt}.$$
It is not difficult to show that $F_{A^{+-}}(\theta)$ is a strictly convex and decreasing function of $\theta \in (-1/\alpha_1, \infty)$. Next, for each fixed $\theta$ the probability distribution of $F_{A^{+-}}(\theta)$ is the same as the distribution of a ratio of positive linear combinations of independent central chi-squared random variables, i.e.,

(12)
$$F_{A^{+-}}(\theta) \sim \frac{\sum_{j=1}^{\kappa} b_j\chi^2_{\nu_j}}{\sum_{j=\kappa+1}^{h} b_j^*\chi^2_{\nu_j}},$$
where

$$b_j = \frac{\alpha_j - E(K)}{(\sigma_1^*\alpha_j + \sigma^*)^2} \ \text{ for } j = 1, \ldots, \kappa:\ \alpha_j > E(K), \qquad b_j^* = \frac{E(K) - \alpha_j}{(\sigma_1^*\alpha_j + \sigma^*)^2} \ \text{ for } j = \kappa+1, \ldots, h:\ \alpha_j < E(K).$$
Now, in order to construct the $1-p$ confidence intervals ($p_1 + p_2 = p$), we must find the critical values $C_{p_1}$ and $C_{p_2}$ such that

(13)
$$Pr\left\{ C_{p_1} \le \frac{\sum_{j=1}^{\kappa} b_j\chi^2_{\nu_j}}{\sum_{j=\kappa+1}^{h} b_j^*\chi^2_{\nu_j}} \le C_{p_2} \right\} = 1 - p_1 - p_2,$$

or equivalently
(14)
$$Pr\left\{ \sum_{j=1}^{\kappa} b_j\chi^2_{\nu_j} - C_{p_1}\sum_{j=\kappa+1}^{h} b_j^*\chi^2_{\nu_j} > 0 \right\} = 1 - p_1, \qquad Pr\left\{ C_{p_2}\sum_{j=\kappa+1}^{h} b_j^*\chi^2_{\nu_j} - \sum_{j=1}^{\kappa} b_j\chi^2_{\nu_j} > 0 \right\} = p_2.$$
Thus the solutions $C_{p_1}$ and $C_{p_2}$ of (13) or (14) are based on the distribution of quadratic forms $Q = \sum_{i=1}^{m} \lambda_i\chi^2_{\nu_i}$ (for details see e.g. Imhof, 1961, Davies, 1980). Next, we must solve the following nonlinear equations:

(15)
$$F_{A^{+-}}(\bar\theta) = C_{p_1} \quad\text{and}\quad F_{A^{+-}}(\underline\theta) = C_{p_2},$$

where $F_{A^{+-}}(\theta)$ is determined by (11) and $0 < C_{p_1}, C_{p_2} \le F_{A^{+-}}(0)$. The existence of the solutions is ensured by the properties of $F_{A^{+-}}(\theta)$. From the equations given by (15) we obtain the exact $1-p$ confidence interval on the variance ratio $\theta$ as follows:

$$[\underline\theta,\ \bar\theta] = \left[ F_{A^{+-}}^{-1}(C_{p_2}),\ F_{A^{+-}}^{-1}(C_{p_1}) \right].$$
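Since $F_{A^{+-}}(\theta)$ is continuous and strictly decreasing, inverting it in (15) is a one-dimensional root-finding problem. A sketch (the eigenvalues, multiplicities, observed quadratic forms $t'E_jt$, and critical values are invented; in practice $C_{p_1}, C_{p_2}$ come from the quadratic-form distribution in (13)):

```python
import numpy as np
from scipy.optimize import brentq

# Invented setting: eigenvalues alpha_j, multiplicities nu_j, observed t'E_j t,
# and the locality point (sigma1*, sigma*) = (0, 1).
alphas = np.array([4.0, 2.0, 0.0])
nus = np.array([2, 2, 4])
tEt = np.array([9.0, 3.0, 5.0])                # invented observed t'E_j t
sig1s, sigs = 0.0, 1.0

c = np.sum(nus / (sig1s * alphas + sigs) ** 2)
pK = nus / (c * (sig1s * alphas + sigs) ** 2)  # Pr{K = alpha_j}
EK = pK @ alphas

def F(theta):
    """F_{A+-}(theta) from (11): ratio of the positive and negative parts."""
    w = (alphas - EK) / ((sig1s * alphas + sigs) ** 2 * (theta * alphas + 1.0))
    num = np.sum(w[alphas > EK] * tEt[alphas > EK])
    den = -np.sum(w[alphas < EK] * tEt[alphas < EK])
    return num / den

# Invented critical values with 0 < C_p1 < C_p2 <= F(0); solving
# F(theta) = C then gives the exact interval [theta_low, theta_up].
C_p1, C_p2 = 0.8 * F(0.0), 0.95 * F(0.0)
theta_up = brentq(lambda th: F(th) - C_p1, 0.0, 1e6)
theta_low = brentq(lambda th: F(th) - C_p2, 0.0, 1e6)
```

Because $F$ is monotone, any bracketing solver converges; the bracket $[0, 10^6]$ is an arbitrary wide choice for the example.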
Finally, to obtain the shortest $1-(p_1+p_2)$ confidence interval we have to allocate properly the probability mass in the left tail ($p_1$) and in the right tail ($p_2$) of the distribution. Thus for a choice of the pair $(p_1^0, p_2^0)$ we obtain the confidence interval on $\theta$ at the fixed confidence level $1-(p_1+p_2)$ whose length $l(p_1, p_2)$ satisfies

$$l(p_1^0, p_2^0) = \min_{\substack{p_1, p_2\\ p_1+p_2=p}} \left( F_{A^{+-}}^{-1}(C_{p_1}) - F_{A^{+-}}^{-1}(C_{p_2}) \right).$$
Remark 2.1. The test statistic given by (8) was used by Michalski and Zmyślony (1996) for testing the hypothesis $H_\sigma: \sigma_1^2 = 0$ versus $K_\sigma: \sigma_1^2 > 0$. Using the Bayes locally best unbiased quadratic invariant estimator of $\sigma_1^2$ at the point $(\sigma_1^*, \sigma^*) = (0, 1)$, which is the MINQUE of $\sigma_1^2$, the test statistic can be expressed as follows:

$$F_{A^{+-}} = \frac{t'A^{*+}t}{t'A^{*-}t} = \frac{\sum_{\alpha_j^* > 0} \alpha_j^*\nu_jZ_j}{\sum_{\alpha_j^* < 0} |\alpha_j^*|\,\nu_jZ_j},$$

where $\alpha_i^* = \alpha_i - \mathrm{trace}(W)/\mathrm{rank}(W)$.
3. Generalized confidence interval on $\sigma_1^2$

Following Tsui and Weerahandi (1989), in this section we present the concept of generalized p-values and generalized test variables, which was further developed by Weerahandi (1991, 1993). The concept has been proposed for testing problems where nuisance parameters are present and it is difficult or impossible to obtain a rational test at a fixed significance level.
3.1. The idea of constructing a generalized confidence interval

Let $X$ be a random vector with a cumulative distribution function $F(x, \vartheta)$, where the vector $\vartheta = (\xi, \delta)$ represents unknown parameters: $\xi$ is a scalar parameter of interest and $\delta$ is a nuisance parameter (scalar or vector). We are interested in testing

$$H_\xi: \xi \le \xi_0 \quad\text{against}\quad K_\xi: \xi > \xi_0.$$

Suppose it is difficult or impossible to find a test statistic $T(X)$ whose distribution at $\xi_0$ is independent of the nuisance parameter $\delta$, so that we cannot determine an appropriate critical region for a given significance level. We then consider a random variable $T(X; x, \vartheta)$, which also depends on the observed value $x$ and the parameters $\vartheta$.

Definition 3.1. A function $T(X; x, \vartheta)$, where $\vartheta = (\xi, \delta)$, is called a generalized test variable if it satisfies the following conditions:

(i) for fixed $\xi$, the distribution of $T(X; x, \vartheta)$ does not depend on $\delta$ for each $x$,

(ii) the observed value of $T(X; x, \vartheta)$ (i.e., $t_{obs} = T(x; x, \vartheta)$) does not depend on unknown parameters,

(iii) for fixed $x$ and $\delta$, $T(X; x, \vartheta)$ is stochastically increasing in $\xi$, i.e., $Pr\{T(X; x, \vartheta) \ge t\}$ is nondecreasing in $\xi$.
Condition (iii) implies that a generalized test variable orders the sample space and thus can be effectively used for finding a critical region for testing $H_\xi: \xi \le \xi_0$ vs $K_\xi: \xi > \xi_0$ in the following form:

$$C(x, \vartheta) = \{X : T(X; x, \vartheta) \ge T(x; x, \vartheta)\},$$

and the generalized p-value for testing $H_\xi$ against $K_\xi$ is

$$p(x) = \sup_{\xi \le \xi_0} Pr\{X \in C(x, \vartheta)\,|\,\xi\} = \sup_{\xi \le \xi_0} Pr\{T(X; x, (\xi, \delta)) \ge t_{obs}\,|\,\xi\} = Pr\{T(X; x, (\xi_0, \delta)) \ge t_{obs}\}.$$

Condition (iii) guarantees that the expressions above for the critical region are equal, and thanks to (i) and (ii) the generalized p-value is computable. Condition (iii) also implies that $Pr\{T(X; x, \vartheta) \ge t_{obs}\,|\,\xi\}$ becomes large as $(\xi - \xi_0)$ increases. Moreover, larger p-values favour the null hypothesis and smaller p-values favour the alternative, so tests based on generalized p-values reject the null hypothesis for small values of $p(x)$, similarly to classical p-values.
Next, having determined a generalized critical region for testing $H_\xi: \xi \le \xi_0$ versus $K_\xi: \xi > \xi_0$, we define a power function of the test based on the data.

Definition 3.2. The function $\pi(x, \xi) = Pr\{X \in C(x, (\xi, \delta))\,|\,\xi\}$ is called a data-based power function if the following hold:

(a) $\pi(x, \xi_0) = p(x)$,

(b) for each fixed $x$, $\pi(x, \xi) \sim R(0, 1)$ (a uniform random variable on $(0,1)$), for an arbitrary $\xi$,

(c) for each fixed $x$, $\pi(x, \xi)$ is a monotonic function of $\xi$.

Properties (a) and (b) guarantee that the power function $\pi(x, \xi)$ can be used for the construction of a confidence interval on the parameter $\xi$. For $\pi_1, \pi_2 \in (0, 1)$ such that $\pi_2 - \pi_1 = 1 - p$ and a given observed $x$ it holds that

$$Pr\{\pi_1 \le \pi(x, \xi) \le \pi_2\} = 1 - p.$$
Finally, by inversion we get a (1 − p) generalized confidence interval on the parameter ξ.
3.2. Generalized test variables

Let us consider the model given by (3) and the problem of testing the hypothesis $H_\sigma: \sigma_1^2 \le \sigma_0^2$ vs $K_\sigma: \sigma_1^2 > \sigma_0^2$ with an arbitrary nonnegative $\sigma_0^2$. The minimal sufficient statistics for the family of normal distributions of the maximal invariant $t = By$, i.e., for $N_t(0, \sigma_1^2 W + \sigma^2 I_t)$ (see Section 1), are $U_1 = t'E_1t, \ldots, U_{h-1} = t'E_{h-1}t, U_h = t'E_ht$, such that $S_1 = U_1/(\alpha_1\sigma_1^2 + \sigma^2) \sim \chi^2_{\nu_1}, \ldots, S_{h-1} = U_{h-1}/(\alpha_{h-1}\sigma_1^2 + \sigma^2) \sim \chi^2_{\nu_{h-1}}$ and $S_h = U_h/\sigma^2 \sim \chi^2_{\nu_h}$.

First, we consider the case $h = 2$, i.e., when the matrix $W$ has only two different eigenvalues: $\alpha_1 > 0$ and $\alpha_2 = 0$. Then the function

$$T(X; x, \xi, \delta) = T((U_1, U_2); (u_1, u_2); \sigma_1^2, \sigma^2) = S_1\left(\frac{1}{S_2} + \frac{\alpha_1\sigma_1^2}{u_2}\right)$$

satisfies conditions (i)-(iii) of a generalized test variable. Its observed value $t_{obs} = u_1/u_2$ does not depend on the unknown parameters, the distribution of $T$ is independent of the nuisance parameter $\sigma^2$, and $T$ is stochastically increasing in $\sigma_1^2$. It can be shown that the generalized test of (5) is unique in this case, i.e., all generalized tests constructed on the basis of the statistics $U_1, U_2$ can be based on the test variable $T$ (see Weerahandi, 1995, Theorem 5.2).
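For the $h = 2$ case, the generalized p-value $Pr\{T \ge t_{obs}\}$ at $\sigma_1^2 = \sigma_0^2$ can be approximated by simulating the independent chi-squared variables $S_1, S_2$ (a sketch; the observed values $u_1, u_2$, the eigenvalue $\alpha_1$, and the degrees of freedom below are invented):

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented observed data and design constants for the h = 2 case.
u1, u2 = 12.0, 8.0                # observed U_1 = t'E_1 t and U_2 = t'E_2 t
alpha1 = 3.0                      # the single positive eigenvalue of W
nu1, nu2 = 4, 6                   # multiplicities of alpha_1 and alpha_2 = 0
sigma0_sq = 0.0                   # H: sigma_1^2 <= sigma_0^2

# T = S_1 (1/S_2 + alpha_1 sigma_1^2 / u_2) with S_1 ~ chi2_nu1, S_2 ~ chi2_nu2;
# its observed value t_obs = u1/u2 is free of unknown parameters.
t_obs = u1 / u2

n = 500_000
S1 = rng.chisquare(nu1, n)
S2 = rng.chisquare(nu2, n)
T = S1 * (1.0 / S2 + alpha1 * sigma0_sq / u2)

# Generalized p-value: Pr{T >= t_obs} evaluated at sigma_1^2 = sigma_0^2.
p_value = np.mean(T >= t_obs)
```

At $\sigma_0^2 = 0$ the variable reduces to $S_1/S_2$, a scaled F ratio, so the simulated p-value can be cross-checked against the F distribution.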
In the general case $h > 2$, since the uniformly most accurate $1-p$ confidence interval on $\theta$ does not exist (see Michalski 1995, 2003), the choice of the generalized test of (5) is likewise not unique. Zhou and Mathew (1994) showed this using a test variable of the form

(16)
$$T_1 = T_1^{c_1,\ldots,c_{h-1}} = \sum_{i=1}^{h-1} c_iS_i\left(\frac{1}{S_h} + \frac{\alpha_i\sigma_1^2}{u_h}\right),$$

which is a generalized test variable for testing (5) for arbitrary positive real numbers $c_i$, $i = 1, \ldots, h-1$. A problem arises how to choose these constants. For testing (5) with $\sigma_0^2 = 0$, the test variable $T_1$ coincides under $H_\sigma$, for special sets of constants $c_i$, with the following statistics:

– Wald's statistic $F_W = \sum_{i=1}^{h-1} U_i / U_h$ for $c_i = 1$ (Seely and El-Bassiouni, 1983),

– modified Wald's statistic $F_W^* = \sum_{i=1}^{h-1} \alpha_iU_i / U_h$ for $c_i = \alpha_i$, or, more generally,

$$F(u, v) = \sum_{i=1}^{h-1} \alpha_iw_i(u,v)\,\frac{U_i}{U_h}, \quad u, v \ge 0,$$

for $c_i = \alpha_iw_i(u,v)$ with $w_i(u,v) = \dfrac{1}{(1 + u\alpha_i)^2 + v\alpha_i^2}$ (Gnot and Michalski, 1991, 1994). It is clear that $F_W^* = F(0, 0)$.
Now, in accordance with Subsection 2.1, consider the Bayes estimators $\hat\sigma_1^2(u, v)$ and $\hat\sigma_1^2(\infty)$ from the class $\mathcal{A}_{IU}$. These estimators can be written in the form $\sum_{i=1}^{h} c_i^\lambda S_i(\sigma^2 + \alpha_i\sigma_1^2)$, where the constants $c_i^\lambda$ are such that the unbiasedness condition holds, and we additionally have to calculate $\lambda = (\lambda_1, \lambda_2)$ (see Subsection 2.1). For the limiting estimator $\hat\sigma_1^2(\infty)$ we obtain $c_i^\lambda = 1/(\alpha_i\,\mathrm{rank}(W))$, where $\mathrm{rank}(W) = \sum_{i=1}^{h-1}\nu_i$, and $c_h^\lambda = -\sum_{i=1}^{h-1}\nu_i\alpha_i/(\nu_h\,\mathrm{rank}(W))$. To obtain independence of the above expression from $\sigma^2$, we multiply this parameter by the observed value $u_h$ and then divide by the random variable $U_h$. In the case when all $c_i^\lambda$, $i = 1, \ldots, h-1$, are positive, it is easy to check that we get a generalized test variable

(17)
$$T_\lambda = \sum_{i=1}^{h-1} c_i^\lambda S_i\left(\frac{u_h}{S_h} + \alpha_i\sigma_1^2\right) + c_h^\lambda u_h,$$

which is stochastically increasing in $\sigma_1^2$ and whose observed value is $t_{obs}^\lambda = \sum_{i=1}^{h} c_i^\lambda u_i$. Let $\pi_T(u_1, \ldots, u_h, \sigma_1^2)$ denote the data-based power function for testing (5) based on the test variable $T$ (see Definition 3.2). The data-based power function corresponding to the test variable $T_\lambda$ is of the form:
(18)
$$\pi_{T_\lambda}(u_1, \ldots, u_h, \sigma_1^2) = Pr\left\{ T_\lambda \ge t_{obs}^\lambda \,\Big|\, \sigma_1^2 \right\} = Pr\left\{ \sum_{i=1}^{h-1} c_i^\lambda S_i\left(\frac{u_h}{S_h} + \alpha_i\sigma_1^2\right) \ge \sum_{i=1}^{h-1} c_i^\lambda u_i \,\Big|\, \sigma_1^2 \right\} = Pr\left\{ \sum_{i=1}^{h-1} c_i^\lambda S_i\left(\frac{1}{S_h} + \frac{\alpha_i\sigma_1^2}{u_h}\right) \ge \sum_{i=1}^{h-1} \frac{c_i^\lambda u_i}{u_h} \,\Big|\, \sigma_1^2 \right\}.$$
The last equality implies that the test based on the variable $T_\lambda$ with constants $c_i = c_i^\lambda$ for $i = 1, \ldots, h-1$ coincides with the test based on $T_1$; it is stochastically increasing in $\sigma_1^2$, and its observed value is $t_{obs}^\lambda = \sum_{i=1}^{h-1} c_i^\lambda u_i/u_h$. Below we put together the generalized test variables described above:
(19)
$$T_1^1 = T_1^{1,\ldots,1} = \sum_{i=1}^{h-1} S_i\left(\frac{1}{S_h} + \frac{\alpha_i\sigma_1^2}{u_h}\right) \quad\text{with}\quad t_{obs}^1 = \sum_{i=1}^{h-1} \frac{u_i}{u_h},$$

(20)
$$T_1^\alpha = T_1^{\alpha_1,\ldots,\alpha_{h-1}} = \sum_{i=1}^{h-1} \alpha_iS_i\left(\frac{1}{S_h} + \frac{\alpha_i\sigma_1^2}{u_h}\right) \quad\text{with}\quad t_{obs}^\alpha = \sum_{i=1}^{h-1} \frac{\alpha_iu_i}{u_h},$$

(21)
$$T_1^\infty = T_1^{1/\alpha_1,\ldots,1/\alpha_{h-1}} = \sum_{i=1}^{h-1} \frac{1}{\alpha_i}\,S_i\left(\frac{1}{S_h} + \frac{\alpha_i\sigma_1^2}{u_h}\right) \quad\text{with}\quad t_{obs}^\infty = \sum_{i=1}^{h-1} \frac{u_i}{\alpha_iu_h}.$$
In general, in connection with a BIQUE $\hat\sigma_1^2(u, v)$, we obtain

(22)
$$T_1^\lambda = T_1^{c_1^\lambda,\ldots,c_{h-1}^\lambda} = \sum_{i=1}^{h-1} c_i^\lambda S_i\left(\frac{1}{S_h} + \frac{\alpha_i\sigma_1^2}{u_h}\right) \quad\text{with}\quad t_{obs}^\lambda = \sum_{i=1}^{h-1} \frac{c_i^\lambda u_i}{u_h},$$

where $c_i^\lambda = (\lambda_1\alpha_i + \lambda_2)w_i$,

$$\lambda_1 = \frac{\displaystyle\sum_{i=1}^{h} w_i\nu_i}{\displaystyle\sum_{i=1}^{h} w_i\alpha_i^2\nu_i \sum_{i=1}^{h} w_i\nu_i - \Big(\sum_{i=1}^{h} w_i\alpha_i\nu_i\Big)^2}, \qquad \lambda_2 = \frac{-\displaystyle\sum_{i=1}^{h} w_i\alpha_i\nu_i}{\displaystyle\sum_{i=1}^{h} w_i\alpha_i^2\nu_i \sum_{i=1}^{h} w_i\nu_i - \Big(\sum_{i=1}^{h} w_i\alpha_i\nu_i\Big)^2}$$

and

$$w_i = \frac{1}{(1 + u\alpha_i)^2 + v\alpha_i^2}, \quad u, v \ge 0.$$
The values of the data-based power function based on $T_1$ are calculated from the following equality:

$$\pi_{T_1}(u_1, \ldots, u_h, \sigma_1^2) = Pr\left\{ \sum_{i=1}^{h-1} c_iS_i\left(\frac{1}{S_h} + \frac{\alpha_i\sigma_1^2}{u_h}\right) \ge \sum_{i=1}^{h-1} \frac{c_iu_i}{u_h} \,\Big|\, \sigma_1^2 \right\} = \int_0^\infty \left( 1 - F_v\Big( \sum_{i=1}^{h-1} \frac{c_iu_i}{u_h} \Big) \right) f_{V_h}(v)\,dv,$$

where $F_v$ denotes the distribution function of a linear combination of independent central $\chi^2$ random variables, i.e.,

$$\sum_{i=1}^{h-1} c_i\left(\frac{1}{v} + \frac{\alpha_i\sigma_1^2}{u_h}\right) S_i \sim \sum_{i=1}^{h-1} c_i\left(\frac{1}{v} + \frac{\alpha_i\sigma_1^2}{u_h}\right)\chi^2_{\nu_i},$$

and $f_{V_h}$ is the density function of a $\chi^2_{\nu_h}$ distribution. Using Imhof's (1961) algorithm, or the one given by Davies (1980), we may calculate probabilities for any quadratic form $Q = \sum_{j=1}^{k} b_j\chi^2_{\nu_j}$.
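The probabilities above all reduce to the distribution function of a linear combination of independent chi-squared variables. Where Imhof's or Davies' exact inversion algorithms are not at hand, a simple Monte Carlo substitute can be sketched (the test case and sample size below are our own choices):

```python
import numpy as np

rng = np.random.default_rng(4)

def quad_form_cdf_mc(b, nu, x, size=400_000):
    """Monte Carlo approximation of Pr{ sum_j b_j chi2_{nu_j} <= x }.

    Imhof (1961) and Davies (1980) give exact numerical inversion formulas;
    this sketch trades exactness for a few lines of code.
    """
    b, nu = np.asarray(b, float), np.asarray(nu)
    chi2 = rng.chisquare(df=nu, size=(size, len(nu)))
    return np.mean(chi2 @ b <= x)

# Check against a case with a known answer: b = (1, 1), nu = (2, 3)
# collapses to a single chi-squared with 5 degrees of freedom, whose
# median is approximately 4.35, so the result should be close to 0.5.
p = quad_form_cdf_mc([1.0, 1.0], [2, 3], x=4.35)
```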
It is worth stressing that in the general case (here for $h > 2$), on account of the nonuniqueness in testing (5), we have a large set of admissible test variables, not only through the choice of the constants $c_i$ in $T_1$. Weerahandi (1995) proposed, in a mixed one-way classification unbalanced model, the generalized test variable $T_2$ as follows:

(23)
$$T_2 = \sum_{i=1}^{h-1} S_i - \sum_{i=1}^{h-1} \frac{u_iS_h}{u_h + \alpha_i\sigma_1^2S_h}$$

(see also Arendacká, 2005). It is easy to check that $T_2$ is stochastically increasing, its observed value is 0, and its data-based power function has a simplified form, more convenient for computation, namely

$$\pi_{T_2}(u_1, \ldots, u_h, \sigma_1^2) = Pr\left\{ \sum_{i=1}^{h-1} S_i - \sum_{i=1}^{h-1} \frac{u_iS_h}{u_h + \alpha_i\sigma_1^2S_h} \ge 0 \,\Big|\, \sigma_1^2 \right\} = Pr\left\{ \sum_{i=1}^{h-1} S_i \ge \sum_{i=1}^{h-1} \frac{u_iS_h}{u_h + \alpha_i\sigma_1^2S_h} \,\Big|\, \sigma_1^2 \right\},$$

where

$$\sum_{i=1}^{h-1} S_i \sim \chi^2_{\nu_1+\ldots+\nu_{h-1}}.$$
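The simplified form of $\pi_{T_2}$ lends itself directly to simulation, since the left-hand side is a single chi-squared variable. A sketch (the eigenvalues, multiplicities, observed values, and $\sigma_1^2$ below are invented):

```python
import numpy as np

rng = np.random.default_rng(5)

# Invented data for evaluating pi_{T2}: h = 3 distinct eigenvalues of W.
alphas = np.array([3.0, 1.0])                  # alpha_1, ..., alpha_{h-1} (> 0)
nus = np.array([2, 3])                         # their multiplicities
nu_h = 5                                       # multiplicity of alpha_h = 0
u = np.array([7.0, 4.0])                       # observed u_1, ..., u_{h-1}
u_h = 6.0
sigma1_sq = 0.5

n = 400_000
S = rng.chisquare(df=nus, size=(n, len(nus)))  # S_1, ..., S_{h-1}
Sh = rng.chisquare(nu_h, n)

# pi_{T2} = Pr{ sum S_i >= sum u_i S_h / (u_h + alpha_i sigma1^2 S_h) }.
lhs = S.sum(axis=1)                            # ~ chi2 with nu_1+...+nu_{h-1} d.f.
rhs = (u * Sh[:, None] / (u_h + alphas * sigma1_sq * Sh[:, None])).sum(axis=1)
power = np.mean(lhs >= rhs)
```

A useful check is that plugging the observed values of the $S_i$ and $S_h$ into $T_2$ gives exactly zero, as stated above.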
4. Comparison - numerical calculations

In this section we compare the effects of the test variables presented previously and used for constructing confidence intervals on the variance component $\sigma_1^2$ in two examples of model (3). A special case of model (1) is the mixed two-way classification model

$$y_{ijk} = \beta_j + \eta_i + e_{ijk}, \quad i = 1, \ldots, s;\ j = 1, \ldots, b;\ k = 1, \ldots, n_{ij},$$

corresponding to a block design $BD(s, b, n, N)$, in which $n$ experimental units are arranged in $b$ blocks (with fixed effects $\beta_j$) and treated by $s$ treatments (with random effects $\eta_i$) according to the incidence matrix $N = \Delta D'$. The matrix form of the above model is

$$y = D'\beta + \Delta'\eta + e.$$

In the numerical calculations we use the $(s \times s)$ matrix $C$ associated with the block design and given by $C = \mathrm{diag}\{r_1, \ldots, r_s\} - N\,\mathrm{diag}\{1/k_1, \ldots, 1/k_b\}\,N'$, where $\Delta 1_n = N1_b = (r_1, \ldots, r_s)'$ is the vector of treatment replications and $D1_n = N'1_s = (k_1, \ldots, k_b)'$ is the vector of block sizes, $n = \sum_{i,j} n_{ij}$.