Central limit theorems for functionals of large sample covariance matrix and mean vector in matrix-variate location mixture of normal distributions



Delft University of Technology

Central limit theorems for functionals of large sample covariance matrix and mean vector

in matrix-variate location mixture of normal distributions

Bodnar, Taras; Mazur, Stepan; Parolya, Nestor

DOI: 10.1111/sjos.12383

Publication date: 2019

Document Version: Accepted author manuscript

Published in: Scandinavian Journal of Statistics

Citation (APA):
Bodnar, T., Mazur, S., & Parolya, N. (2019). Central limit theorems for functionals of large sample covariance matrix and mean vector in matrix-variate location mixture of normal distributions. Scandinavian Journal of Statistics, 46(2), 636-660. https://doi.org/10.1111/sjos.12383

Important note

To cite this publication, please use the final published version (if applicable). Please check the document version above.

Copyright

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Takedown policy

Please contact us and provide details if you believe this document breaches copyrights. We will remove access to the work immediately and investigate your claim.


Central limit theorems for functionals of large sample covariance matrix and mean vector in matrix-variate location mixture of normal distributions

Taras Bodnar^a, Stepan Mazur^b, Nestor Parolya^c,*

^a Department of Mathematics, Stockholm University
^b Department of Statistics, Örebro University
^c Institute of Statistics, Leibniz University of Hannover

Abstract

In this paper we consider the asymptotic distributions of functionals of the sample covariance matrix and the sample mean vector obtained under the assumption that the matrix of observations has a matrix-variate location mixture of normal distributions. The central limit theorem is derived for the product of the sample covariance matrix and the sample mean vector. Moreover, we consider the product of the inverse sample covariance matrix and the mean vector, for which the central limit theorem is established as well. All results are obtained under the large-dimensional asymptotic regime, where the dimension p and the sample size n tend to infinity such that p/n → c ∈ [0, +∞) when the sample covariance matrix need not be invertible and p/n → c ∈ [0, 1) otherwise.

Keywords: large dimensional asymptotics, normal mixtures, random matrix theory, skew normal distribution, stochastic representation.


1 Introduction

The functions of the sample covariance matrix and the sample mean vector appear in various statistical applications. The classical improvement techniques for mean estimation have already been discussed by Stein (1956) and Jorion (1986). In particular, Efron (2006) constructed confidence regions of smaller volume than the standard spheres for the mean vector of a multivariate normal distribution. Fan et al. (2008), Bai and Shi (2011), Bodnar and Gupta (2011), Cai and Zhou (2012), Cai and Yuan (2012), Fan et al. (2013), Bodnar et al. (2014a), Wang et al. (2015), and Bodnar et al. (2016), among others, suggested improved techniques for the estimation of the covariance matrix and the precision matrix (the inverse of the covariance matrix).

In our work we introduce the family of matrix-variate location mixture of normal distributions (MVLMN), which is a generalization of the models considered by Azzalini and Dalla-Valle (1996), Azzalini and Capitanio (1999), Azzalini (2005), Liseo and Loperfido (2003, 2006), Bartoletti and Loperfido (2010), Loperfido (2010), Christiansen and Loperfido (2014), Adcock et al. (2015), and De Luca and Loperfido (2015), among others. Under the assumption of MVLMN we consider expressions for the sample mean vector $\bar{x}$ and the sample covariance matrix $S$. In particular, we deal with the two products $l^\top S\bar{x}$ and $l^\top S^{-1}\bar{x}$, where $l$ is a non-zero vector of constants. It is noted that this kind of expression has not been intensively considered in the literature, although such expressions are present in numerous important applications. The first application of the products arises in portfolio theory, where the vector of optimal portfolio weights is proportional to $S^{-1}\bar{x}$. The second application is in discriminant analysis, where the coefficients of the discriminant function are expressed as a product of the inverse sample covariance matrix and the difference of the sample mean vectors.

Bodnar and Okhrin (2011) derived the exact distribution of the product of the inverse sample covariance matrix and the sample mean vector under the assumption of normality, while Kotsiuba and Mazur (2015) obtained its asymptotic distribution as well as its approximate density based on the Gaussian integral and the third-order Taylor series expansion. Moreover, Bodnar et al. (2013, 2014b) analyzed the product of the (singular) sample covariance matrix and the sample mean vector. In the present paper, we contribute to the existing literature by deriving central limit theorems (CLTs) under the introduced class of matrix-variate distributions in the case of a high-dimensional observation matrix. Under the considered family of distributions, the columns of the observation matrix are no longer independent and, thus, the CLTs cover a more general class of random matrices.


In many practical situations the number of available observations is comparable to the number of features (dimension), and so the sample covariance matrix and the sample mean vector are no longer efficient estimators. For example, stock markets include a large number of companies, which is often close to the number of available time points. In order to better understand the statistical properties of the traditional estimators and tests in high-dimensional settings, it is of interest to study the asymptotic distributions of the above-mentioned bilinear forms involving the sample covariance matrix and the sample mean vector.

Appropriate central limit theorems, which do not suffer from the "curse of dimensionality" and do not reduce the number of dimensions, are of great interest for high-dimensional statistics because more efficient estimators and tests may be constructed and applied in practice. The classical multivariate procedures are based on central limit theorems assuming that the dimension p is fixed and the sample size n increases. However, numerous authors provide quite reasonable proofs that this assumption does not lead to precise distributional approximations for commonly used statistics, and that under increasing-dimension asymptotics better approximations can be obtained [see, e.g., Bai and Silverstein (2004) and references therein]. Technically speaking, by high-dimensional asymptotics we understand the case when the sample size n and the dimension p tend to infinity such that their ratio p/n converges to some positive constant c. Under this condition the well-known Marčenko–Pastur and Silverstein equations were derived [see Marčenko and Pastur (1967), Silverstein (1995)].

The rest of the paper is structured as follows. In Section 2 we introduce a semi-parametric matrix-variate location mixture of normal distributions. The main results are given in Section 3, where we derive the central limit theorems under the high-dimensional asymptotic regime for products involving the (inverse) sample covariance matrix and the sample mean vector under the MVLMN. Section 4 presents a short numerical study in order to verify the obtained analytic results.

2 Semi-parametric family of matrix-variate location mixture of normal distributions

In this section we introduce the family of MVLMN, which generalizes the existing families of skew normal distributions.


Let
$$X = \begin{pmatrix} x_{11} & \dots & x_{1n} \\ \vdots & \ddots & \vdots \\ x_{p1} & \dots & x_{pn} \end{pmatrix} = (x_1, \dots, x_n)$$
be the $p \times n$ observation matrix, where $x_j$ is the $j$th observation vector. In the following, we assume that the random matrix $X$ possesses the stochastic representation
$$X \stackrel{d}{=} Y + B\nu 1_n^\top, \qquad (1)$$
where $Y \sim N_{p,n}(\mu 1_n^\top, \Sigma \otimes I_n)$ (the $p \times n$-dimensional matrix-variate normal distribution with mean matrix $\mu 1_n^\top$ and covariance matrix $\Sigma \otimes I_n$), $\nu$ is a $q$-dimensional random vector with continuous density function $f_\nu(\cdot)$, and $B$ is a $p \times q$ matrix of constants. Further, it is assumed that $Y$ and $\nu$ are independently distributed. If the random matrix $X$ follows model (1), then we say that $X$ is MVLMN distributed with parameters $\mu$, $\Sigma$, $B$, and $f_\nu(\cdot)$. The first three parameters are finite dimensional, while the fourth parameter is infinite dimensional, which makes model (1) of a semi-parametric type. This assertion we denote by $X \sim LMN_{p,n;q}(\mu, \Sigma, B; f_\nu)$. If $f_\nu$ can be parametrized by a finite-dimensional parameter $\theta$, then (1) reduces to a parametric model, which is denoted by $X \sim LMN_{p,n;q}(\mu, \Sigma, B; \theta)$. If $n = 1$, then we use the notation $LMN_{p;q}(\cdot, \cdot, \cdot; \cdot)$ instead of $LMN_{p,1;q}(\cdot, \cdot, \cdot; \cdot)$.
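To make the stochastic representation (1) concrete, the following minimal sketch simulates one draw of $X$ under model (1). The dimensions and the choice of $f_\nu$ (a truncated normal, as in Proposition 1 below) are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(42)
p, n, q = 5, 20, 2                               # illustrative dimensions

mu = rng.uniform(-1, 1, size=p)                  # location vector mu
B = rng.uniform(0, 1, size=(p, q))               # p x q matrix of constants
Sigma = np.diag(rng.uniform(0.5, 1.5, size=p))   # positive definite covariance

# Y ~ N_{p,n}(mu 1_n^T, Sigma (x) I_n): the columns are i.i.d. N_p(mu, Sigma)
Y = rng.multivariate_normal(mu, Sigma, size=n).T

# nu = |psi| with psi ~ N_q(0, I_q), a q-variate truncated normal draw
nu = np.abs(rng.standard_normal(q))

# X = Y + B nu 1_n^T: the same random shift B @ nu is added to every column,
# which is why the columns of X are dependent under the MVLMN model
X = Y + np.outer(B @ nu, np.ones(n))
```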

From (1) the density function of $X$ is expressed as
$$f_X(Z) = \int_{\mathbb{R}^q} f_{N_{p,n}(\mu 1_n^\top, \Sigma \otimes I_n)}(Z - B\nu^* 1_n^\top)\, f_\nu(\nu^*)\, d\nu^*. \qquad (2)$$

In the special case when $\nu = |\psi|$ is the vector formed by the absolute values of every element of $\psi$, where $\psi \sim N_q(0, \Omega)$, i.e., $\nu$ has a $q$-variate truncated normal distribution, we get the following result.

Proposition 1. Assume model (1). Let $\nu = |\psi|$ with $\psi \sim N_q(0, \Omega)$. Then the density function of $X$ is given by
$$f_X(Z) = \widetilde{C}^{-1}\, \Phi_q\!\left(0;\, -DE\,\mathrm{vec}(Z - \mu 1_n^\top),\, D\right) \phi_{pn}\!\left(\mathrm{vec}(Z - \mu 1_n^\top);\, 0,\, F\right), \qquad (3)$$
where $D = (nB^\top \Sigma^{-1} B + \Omega^{-1})^{-1}$, $E = 1_n^\top \otimes B^\top \Sigma^{-1}$, $F = (I_n \otimes \Sigma^{-1} - E^\top D E)^{-1}$, and
$$\widetilde{C}^{-1} = C^{-1}\, \frac{|F|^{1/2}|D|^{1/2}}{|\Omega|^{1/2}|\Sigma|^{n/2}}.$$


The proof of Proposition 1 is given in the Appendix.

It is remarkable that model (1) includes several skew-normal distributions considered by Azzalini and Dalla-Valle (1996), Azzalini and Capitanio (1999), and Azzalini (2005). For example, in the case $n = 1$, $q = 1$, $\mu = 0$, $B = \Delta 1_p$, and $\Sigma = (I_p - \Delta^2)^{1/2}\Psi(I_p - \Delta^2)^{1/2}$ we get
$$X \stackrel{d}{=} (I_p - \Delta^2)^{1/2} v_0 + \Delta 1_p |v_1|, \qquad (4)$$
where $v_0 \sim N_p(0, \Psi)$ and $v_1 \sim N(0, 1)$ are independently distributed, $\Psi$ is a correlation matrix, and $\Delta = \operatorname{diag}(\delta_1, \dots, \delta_p)$ with $\delta_j \in (-1, 1)$. Model (4) was previously introduced by Azzalini (2005).
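For illustration, a minimal sketch generating one draw from representation (4); the values of $\delta_j$ and $\Psi$ below are arbitrary choices for the example, not prescribed by the model.

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)
p = 5
delta = np.full(p, 0.5)        # skewness parameters delta_j in (-1, 1)
Psi = np.eye(p)                # correlation matrix
Delta = np.diag(delta)

v0 = rng.multivariate_normal(np.zeros(p), Psi)   # v0 ~ N_p(0, Psi)
v1 = rng.standard_normal()                       # v1 ~ N(0, 1), independent of v0

# X = (I_p - Delta^2)^{1/2} v0 + Delta 1_p |v1|, cf. Azzalini (2005)
X = sqrtm(np.eye(p) - Delta @ Delta).real @ v0 + delta * np.abs(v1)
```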

Moreover, model (1) also extends the classical random coefficient growth-curve model (see Potthoff and Roy (1964) and Amemiya (1994), among others); i.e., the columns of $X$ can be rewritten in the following way:
$$x_i \stackrel{d}{=} \mu + B\nu + \varepsilon_i, \quad i = 1, \dots, n, \qquad (5)$$
where $\varepsilon_1, \dots, \varepsilon_n$ are i.i.d. $N_p(0, \Sigma)$, $\nu \sim f_\nu$, and $\varepsilon_1, \dots, \varepsilon_n, \nu$ are independent. In the random coefficient growth-curve model it is typically assumed that $\varepsilon_i \sim N_p(0, \sigma^2 I)$ and $\nu \sim N_q(0, \Omega)$ (see, e.g., Rao (1965)). As a result, the suggested MVLMN model may be useful for studying the robustness of random coefficient models against non-normality.

3 CLTs for expressions involving the sample covariance matrix and the sample mean vector

The sample estimators of the mean vector and the covariance matrix are given by
$$\bar{x} = \frac{1}{n}\sum_{i=1}^n x_i = \frac{1}{n} X 1_n \quad\text{and}\quad S = \frac{1}{n-1}\sum_{i=1}^n (x_i - \bar{x})(x_i - \bar{x})^\top = \frac{1}{n-1} X V X^\top,$$
where $V = I_n - \frac{1}{n} 1_n 1_n^\top$ is a symmetric idempotent matrix, i.e., $V = V^\top$ and $V^2 = V$.
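For reference, a minimal numpy sketch of these two estimators using the centering matrix $V$ (the observation matrix $X$ is assumed to be given):

```python
import numpy as np

def sample_mean_and_cov(X):
    """X is the p x n observation matrix; returns (x_bar, S) as defined above."""
    p, n = X.shape
    V = np.eye(n) - np.ones((n, n)) / n   # V = I_n - (1/n) 1_n 1_n^T
    x_bar = X @ np.ones(n) / n            # x_bar = (1/n) X 1_n
    S = X @ V @ X.T / (n - 1)             # S = (1/(n-1)) X V X^T
    return x_bar, S
```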

The following proposition shows that $\bar{x}$ and $S$ are independently distributed and presents their marginal distributions under model (1). Moreover, its results lead to the conclusion that the independence of $\bar{x}$ and $S$ cannot be used as a characterization property of a multivariate normal distribution when the observation vectors in the data matrix are dependent.


Proposition 2. Let $X \sim LMN_{p,n;q}(\mu, \Sigma, B; f_\nu)$. Then

(a) $(n-1)S \sim W_p(n-1, \Sigma)$ (the $p$-dimensional Wishart distribution for $p \le n-1$ and the $p$-dimensional singular Wishart distribution for $p > n-1$, with $n-1$ degrees of freedom and covariance matrix $\Sigma$),

(b) $\bar{x} \sim LMN_{p;q}\!\left(\mu, \frac{1}{n}\Sigma, B; f_\nu\right)$,

(c) $S$ and $\bar{x}$ are independently distributed.

Proof. The statements of the proposition follow immediately from the fact that
$$\bar{x} = \frac{1}{n}\sum_{i=1}^n x_i = \bar{y} + B\nu \quad\text{and}\quad S = \frac{1}{n-1} X V X^\top = \frac{1}{n-1} Y V Y^\top. \qquad (6)$$
Indeed, from (6), the multivariate normality of $Y$ and the independence of $Y$ and $\nu$, we get that $\bar{x}$ and $S$ are independent; $S$ is (singular) Wishart distributed with $n-1$ degrees of freedom and covariance matrix $\Sigma$; and $\bar{x}$ has a location mixture of normal distributions with parameters $\mu$, $\frac{1}{n}\Sigma$, $B$, and $f_\nu$.

For the validity of the asymptotic results presented in Sections 3.1 and 3.2 we need the following two conditions:

(A1) Let $(\lambda_i, u_i)$ denote the eigenvalues and eigenvectors of $\Sigma$. We assume that there exist $m_1$ and $M_1$ such that
$$0 < m_1 \le \lambda_1 \le \lambda_2 \le \dots \le \lambda_p \le M_1 < \infty$$
uniformly in $p$.

(A2) There exists $M_2 < \infty$ such that
$$|u_i^\top \mu| \le M_2 \quad\text{and}\quad |u_i^\top b_j| \le M_2 \quad\text{for all } i = 1, \dots, p \text{ and } j = 1, \dots, q$$
uniformly in $p$, where $b_j$, $j = 1, \dots, q$, are the columns of $B$.

Generally, we say that an arbitrary $p$-dimensional vector $l$ satisfies condition (A2) if $|u_i^\top l| \le M_2 < \infty$ for all $i = 1, \dots, p$.

Assumption (A1) is a classical condition in random matrix theory (see Bai and Silverstein (2004)), which bounds the spectrum of $\Sigma$ from below as well as from above. Assumption (A2) is a technical one. In combination with (A1), this condition ensures that $p^{-1}\mu^\top\Sigma\mu$, $p^{-1}\mu^\top\Sigma^{-1}\mu$, $p^{-1}\mu^\top\Sigma^3\mu$, $p^{-1}\mu^\top\Sigma^{-3}\mu$, as well as all the diagonal elements of $B^\top\Sigma B$, $B^\top\Sigma^3 B$, and $B^\top\Sigma^{-1}B$, are uniformly bounded. All these quadratic forms are used in the statements and proofs of our results. Note that the constants appearing in the inequalities will be denoted by $M_2$ and may vary from one expression to another. We further note that assumption (A2) is automatically fulfilled if $\mu$ and $B$ are sparse (not $\Sigma$ itself). More precisely, a stronger condition which implies (A2) is that $\|\mu\| < \infty$ and $\|b_j\| < \infty$ for $j = 1, \dots, q$ uniformly in $p$. Indeed, applying the Cauchy–Schwarz inequality to $(u_i^\top\mu)^2$ we get
$$(u_i^\top\mu)^2 \le \|u_i\|^2\|\mu\|^2 = \|\mu\|^2 < \infty. \qquad (7)$$
It is remarkable that $\|\mu\| < \infty$ is fulfilled if $\mu$ is indeed a sparse vector or if all its elements are of order $O(p^{-1/2})$. Thus, a sufficient condition for (A2) to hold is either sparsity of $\mu$ and $B$, or that their elements are reasonably small but not exactly equal to zero. Note that the assumption of sparsity for $B$ is quite natural in the context of high-dimensional random coefficient regression models. It implies that if $B$ is large dimensional, then it may have many zeros, and there exists a small set of highly significant random coefficients which drive the random effects in the model (see, e.g., Bühlmann and van de Geer (2011)). Moreover, it is hard to estimate $\mu$ in a reasonable way when $\|\mu\| \to \infty$ as $p \to \infty$ (see, e.g., Bodnar et al. (2018b)). In general, however, $\|\mu\|$ and $\|b_j\|$ do not need to be bounded; in that case the sparsity of the eigenvectors of $\Sigma$ may guarantee the validity of (A2). Consequently, depending on the properties of the given data set, the proposed model may cover many practical problems.

3.1 CLT for the product of sample covariance matrix and sample mean vector

In this section we present the central limit theorem for the product of the sample covariance matrix and the sample mean vector.

Theorem 1. Assume $X \sim LMN_{p,n;q}(\mu, \Sigma, B; f_\nu)$ with $\Sigma$ positive definite and let $p/n = c + o(n^{-1/2})$, $c \in [0, +\infty)$ as $n \to \infty$. Let $l$ be a $p$-dimensional vector of constants that satisfies condition (A2). Then, under (A1) and (A2), it holds that
$$\sqrt{n}\,\sigma_\nu^{-1}\left(l^\top S\bar{x} - l^\top\Sigma\mu_\nu\right) \xrightarrow{D} N(0, 1) \quad \text{for } p/n \to c \in [0, +\infty) \text{ as } n \to \infty, \qquad (8)$$
where
$$\mu_\nu = \mu + B\nu, \qquad (9)$$
$$\sigma_\nu^2 = \left(\mu_\nu^\top\Sigma\mu_\nu + c\,\frac{\operatorname{tr}(\Sigma^2)}{p}\right) l^\top\Sigma l + (l^\top\Sigma\mu_\nu)^2 + l^\top\Sigma^3 l. \qquad (10)$$


Proof. First, we consider the case $p \le n-1$, i.e., $S$ has a Wishart distribution. Let $L = (l, \bar{x})^\top$ and define $\tilde{S} = LSL^\top = \{\tilde{S}_{ij}\}_{i,j=1,2}$ with $\tilde{S}_{11} = l^\top S l$, $\tilde{S}_{12} = l^\top S\bar{x}$, $\tilde{S}_{21} = \bar{x}^\top S l$, and $\tilde{S}_{22} = \bar{x}^\top S\bar{x}$. Similarly, let $\tilde{\Sigma} = L\Sigma L^\top = \{\tilde{\Sigma}_{ij}\}_{i,j=1,2}$ with $\tilde{\Sigma}_{11} = l^\top\Sigma l$, $\tilde{\Sigma}_{12} = l^\top\Sigma\bar{x}$, $\tilde{\Sigma}_{21} = \bar{x}^\top\Sigma l$, and $\tilde{\Sigma}_{22} = \bar{x}^\top\Sigma\bar{x}$.

Since $S$ and $\bar{x}$ are independently distributed, $S \sim W_p\!\left(n-1, \frac{1}{n-1}\Sigma\right)$ and $\operatorname{rank} L = 2 \le p$ with probability one, we get from Theorem 3.2.5 of Muirhead (1982) that $\tilde{S} \mid \bar{x} \sim W_2\!\left(n-1, \frac{1}{n-1}\tilde{\Sigma}\right)$. As a result, the application of Theorem 3.2.10 of Muirhead (1982) leads to
$$\tilde{S}_{12} \mid \tilde{S}_{22}, \bar{x} \sim N\!\left(\tilde{\Sigma}_{12}\tilde{\Sigma}_{22}^{-1}\tilde{S}_{22},\; \frac{1}{n-1}\tilde{\Sigma}_{11\cdot 2}\tilde{S}_{22}\right),$$
where $\tilde{\Sigma}_{11\cdot 2} = \tilde{\Sigma}_{11} - \tilde{\Sigma}_{12}^2/\tilde{\Sigma}_{22}$ is the Schur complement.

Let $\xi = (n-1)\tilde{S}_{22}/\tilde{\Sigma}_{22}$; then
$$l^\top S\bar{x} \mid \xi, \bar{x} \sim N\!\left(\frac{\xi}{n-1}\, l^\top\Sigma\bar{x},\; \frac{\xi}{(n-1)^2}\left[\bar{x}^\top\Sigma\bar{x}\; l^\top\Sigma l - (\bar{x}^\top\Sigma l)^2\right]\right).$$

From Theorem 3.2.8 of Muirhead (1982) it follows that $\xi$ and $\bar{x}$ are independently distributed and $\xi \sim \chi^2_{n-1}$. Hence, the stochastic representation of $l^\top S\bar{x}$ is given by
$$l^\top S\bar{x} \stackrel{d}{=} \frac{\xi}{n-1}\, l^\top\Sigma\bar{x} + \sqrt{\frac{\xi}{n-1}}\left(\bar{x}^\top\Sigma\bar{x}\; l^\top\Sigma l - (l^\top\Sigma\bar{x})^2\right)^{1/2}\frac{z_0}{\sqrt{n-1}}, \qquad (11)$$
where $\xi \sim \chi^2_{n-1}$, $z_0 \sim N(0, 1)$, $\bar{x} \sim LMN_{p;q}\!\left(\mu, \frac{1}{n}\Sigma, B; f_\nu\right)$; $\xi$, $z_0$ and $\bar{x}$ are mutually independent. The symbol "$\stackrel{d}{=}$" stands for equality in distribution.

It is remarkable that the stochastic representation (11) for $l^\top S\bar{x}$ remains valid also in the case $p > n-1$, following the proof of Theorem 4 in Bodnar et al. (2014b). Hence, we make no distinction between these two cases in the remaining part of the proof.

From the properties of the $\chi^2$-distribution and using the fact that $n/(n-1) \to 1$ as $n \to \infty$, we immediately receive
$$\sqrt{n}\left(\frac{\xi}{n} - 1\right) \xrightarrow{D} N(0, 2) \quad\text{as } n \to \infty. \qquad (12)$$
We further get that $\sqrt{n}\,(z_0/\sqrt{n}) \sim N(0, 1)$ for all $n$ and, consequently, this is also its asymptotic distribution.


Next, we derive the asymptotic distribution of the bilinear forms $\bar{x}^\top\Sigma\bar{x}$ and $l^\top\Sigma\bar{x}$ conditionally on $\nu$. For any $a_1$ and $a_2$ we consider
$$a_1\bar{x}^\top\Sigma\bar{x} + 2a_2 l^\top\Sigma\bar{x} = a_1\left(\bar{x} + \frac{a_2}{a_1}l\right)^{\!\top}\Sigma\left(\bar{x} + \frac{a_2}{a_1}l\right) - \frac{a_2^2}{a_1}\, l^\top\Sigma l = a_1\tilde{x}^\top\Sigma\tilde{x} - \frac{a_2^2}{a_1}\, l^\top\Sigma l,$$
where $\tilde{x} = \bar{x} + \frac{a_2}{a_1}l$ and $\tilde{x} \mid \nu \sim N_p\!\left(\mu_{a,\nu}, \frac{1}{n}\Sigma\right)$ with $\mu_{a,\nu} = \mu + B\nu + \frac{a_2}{a_1}l$. By Provost and Rudiuk (1996) the stochastic representation of $\tilde{x}^\top\Sigma\tilde{x}$ is given by
$$\tilde{x}^\top\Sigma\tilde{x} \stackrel{d}{=} \frac{1}{n}\sum_{i=1}^p \lambda_i^2 \xi_i,$$
where $\xi_1, \dots, \xi_p$ given $\nu$ are independent with $\xi_i \mid \nu \sim \chi^2_1(\delta_i^2)$, $\delta_i = \sqrt{n}\,\lambda_i^{-1/2}\, u_i^\top\mu_{a,\nu}$. Here the symbol $\chi^2_d(\delta_i^2)$ denotes the chi-squared distribution with $d$ degrees of freedom and non-centrality parameter $\delta_i^2$.

Now we apply the Lindeberg CLT to the conditionally independent random variables $V_i = \lambda_i^2\xi_i/n$. For that reason we first need to verify Lindeberg's condition. Denoting $\sigma_n^2 = \mathbb{V}\!\left(\sum_{i=1}^p V_i \mid \nu\right)$, we get
$$\sigma_n^2 = \sum_{i=1}^p \mathbb{V}\!\left(\frac{\lambda_i^2}{n}\xi_i \,\Big|\, \nu\right) = \sum_{i=1}^p \frac{\lambda_i^4}{n^2}\, 2(1 + 2\delta_i^2) = \frac{1}{n^2}\left(2\operatorname{tr}(\Sigma^4) + 4n\,\mu_{a,\nu}^\top\Sigma^3\mu_{a,\nu}\right). \qquad (13)$$
We need to check that for any small $\varepsilon > 0$ it holds that
$$\lim_{n\to\infty}\frac{1}{\sigma_n^2}\sum_{i=1}^p \mathbb{E}\!\left[(V_i - \mathbb{E}(V_i))^2\, \mathbf{1}_{\{|V_i - \mathbb{E}(V_i)| > \varepsilon\sigma_n\}} \,\Big|\, \nu\right] = 0.$$
First, we get
$$\sum_{i=1}^p \mathbb{E}\!\left[(V_i - \mathbb{E}(V_i \mid \nu))^2\, \mathbf{1}_{\{|V_i - \mathbb{E}(V_i \mid \nu)| > \varepsilon\sigma_n\}} \,\Big|\, \nu\right] \overset{\text{Cauchy–Schwarz}}{\le} \sum_{i=1}^p \mathbb{E}^{1/2}\!\left[(V_i - \mathbb{E}(V_i \mid \nu))^4 \,\big|\, \nu\right]\, \mathbb{P}^{1/2}\!\left\{|V_i - \mathbb{E}(V_i \mid \nu)| > \varepsilon\sigma_n \mid \nu\right\}$$
$$\overset{\text{Chebyshev}}{\le} \sum_{i=1}^p \frac{\lambda_i^4}{n^2}\sqrt{12(1 + 2\delta_i^2)^2 + 48(1 + 4\delta_i^2)}\;\frac{\sigma_i}{\varepsilon\sigma_n}$$
with $\sigma_i^2 = \mathbb{V}(V_i \mid \nu)$ and, thus,
$$\frac{1}{\sigma_n^2}\sum_{i=1}^p \mathbb{E}\!\left[(V_i - \mathbb{E}(V_i \mid \nu))^2\, \mathbf{1}_{\{|V_i - \mathbb{E}(V_i \mid \nu)| > \varepsilon\sigma_n\}} \,\Big|\, \nu\right] \le \frac{1}{\varepsilon}\,\frac{\sum_{i=1}^p \lambda_i^4\sqrt{12(1 + 2\delta_i^2)^2 + 48(1 + 4\delta_i^2)}\,\frac{\sigma_i}{\sigma_n}}{2\operatorname{tr}(\Sigma^4) + 4n\,\mu_{a,\nu}^\top\Sigma^3\mu_{a,\nu}}$$
$$= \frac{\sqrt{3}}{\varepsilon}\,\frac{\sum_{i=1}^p \lambda_i^4\sqrt{(5 + 2\delta_i^2)^2 - 20}\,\frac{\sigma_i}{\sigma_n}}{\operatorname{tr}(\Sigma^4) + 2n\,\mu_{a,\nu}^\top\Sigma^3\mu_{a,\nu}} \le \frac{\sqrt{3}}{\varepsilon}\,\frac{\sum_{i=1}^p \lambda_i^4 (5 + 2\delta_i^2)\,\frac{\sigma_i}{\sigma_n}}{\operatorname{tr}(\Sigma^4) + 2n\,\mu_{a,\nu}^\top\Sigma^3\mu_{a,\nu}} \le \frac{\sqrt{3}}{\varepsilon}\,\frac{5\operatorname{tr}(\Sigma^4) + 2n\,\mu_{a,\nu}^\top\Sigma^3\mu_{a,\nu}}{\operatorname{tr}(\Sigma^4) + 2n\,\mu_{a,\nu}^\top\Sigma^3\mu_{a,\nu}}\,\frac{\sigma_{\max}}{\sigma_n}$$
$$\le \frac{\sqrt{3}}{\varepsilon}\left(\frac{4}{1 + 2n\,\mu_{a,\nu}^\top\Sigma^3\mu_{a,\nu}/\operatorname{tr}(\Sigma^4)} + 1\right)\frac{\sigma_{\max}}{\sigma_n} \le \frac{5\sqrt{3}}{\varepsilon}\,\frac{\sigma_{\max}}{\sigma_n}\,.$$
Finally, Assumptions (A1) and (A2) yield
$$\frac{\sigma_{\max}^2}{\sigma_n^2} = \frac{\sup_i \sigma_i^2}{\sigma_n^2} = \frac{\sup_i \lambda_i^4(1 + 2\delta_i^2)}{\operatorname{tr}(\Sigma^4) + 2n\,\mu_{a,\nu}^\top\Sigma^3\mu_{a,\nu}} = \frac{\sup_i\left(\lambda_i^4 + 2n\lambda_i^3(u_i^\top\mu_{a,\nu})^2\right)}{\operatorname{tr}(\Sigma^4) + 2n\,\mu_{a,\nu}^\top\Sigma^3\mu_{a,\nu}} \longrightarrow 0, \qquad (14)$$
which verifies the Lindeberg condition, since
$$(u_i^\top\mu_{a,\nu})^2 = \left(u_i^\top\mu + u_i^\top B\nu + \frac{a_2}{a_1}u_i^\top l\right)^2 = (u_i^\top\mu)^2 + \left(\frac{a_2}{a_1}u_i^\top l\right)^2 + (u_i^\top B\nu)^2 + 2u_i^\top\mu\cdot u_i^\top B\nu + 2\frac{a_2}{a_1}u_i^\top l\left(u_i^\top\mu + u_i^\top B\nu\right)$$
$$\overset{(A2)}{\le} M_2^2 + qM_2^2\,\nu^\top\nu + M_2^2\frac{a_2^2}{a_1^2} + 2M_2^2\sqrt{q\,\nu^\top\nu} + 2M_2^2\frac{|a_2|}{|a_1|}\left(1 + \sqrt{q\,\nu^\top\nu}\right) = M_2^2\left(1 + \sqrt{q\,\nu^\top\nu} + \frac{|a_2|}{|a_1|}\right)^2 < \infty. \qquad (15)$$

Thus, using (13) and
$$\sum_{i=1}^p \mathbb{E}(V_i \mid \nu) = \sum_{i=1}^p \frac{\lambda_i^2}{n}(1 + \delta_i^2) = \operatorname{tr}(\Sigma^2)/n + \mu_{a,\nu}^\top\Sigma\mu_{a,\nu}, \qquad (16)$$
we get that
$$\sqrt{n}\;\frac{\tilde{x}^\top\Sigma\tilde{x} - \operatorname{tr}(\Sigma^2)/n - \mu_{a,\nu}^\top\Sigma\mu_{a,\nu}}{\sqrt{\operatorname{tr}(\Sigma^4)/n + 2\,\mu_{a,\nu}^\top\Sigma^3\mu_{a,\nu}}} \,\Big|\; \nu \xrightarrow{D} N(0, 2)$$


and for $a_1\bar{x}^\top\Sigma\bar{x} + 2a_2 l^\top\Sigma\bar{x}$ we have
$$\sqrt{n}\;\frac{a_1\bar{x}^\top\Sigma\bar{x} + 2a_2 l^\top\Sigma\bar{x} - a_1\left(\operatorname{tr}(\Sigma^2)/n + \mu_{a,\nu}^\top\Sigma\mu_{a,\nu}\right) + \frac{a_2^2}{a_1}\, l^\top\Sigma l}{\sqrt{a_1^2\left(\operatorname{tr}(\Sigma^4)/n + 2\,\mu_{a,\nu}^\top\Sigma^3\mu_{a,\nu}\right)}} \,\Big|\; \nu \xrightarrow{D} N(0, 2).$$
Denoting $a = (a_1, 2a_2)^\top$ and $\mu_\nu = \mu + B\nu$, we can rewrite this as
$$\sqrt{n}\left(a^\top\begin{pmatrix}\bar{x}^\top\Sigma\bar{x}\\ l^\top\Sigma\bar{x}\end{pmatrix} - a^\top\begin{pmatrix}\mu_\nu^\top\Sigma\mu_\nu + c\,\frac{\operatorname{tr}(\Sigma^2)}{p}\\ l^\top\Sigma\mu_\nu\end{pmatrix}\right) \Big|\; \nu \xrightarrow{D} N\!\left(0,\; a^\top\begin{pmatrix}2c\,\frac{\operatorname{tr}(\Sigma^4)}{p} + 4\,\mu_\nu^\top\Sigma^3\mu_\nu & 2\, l^\top\Sigma^3\mu_\nu\\ 2\, l^\top\Sigma^3\mu_\nu & l^\top\Sigma^3 l\end{pmatrix} a\right), \qquad (17)$$
which implies that the vector $\sqrt{n}\left(\bar{x}^\top\Sigma\bar{x} - \mu_\nu^\top\Sigma\mu_\nu - c\,\frac{\operatorname{tr}(\Sigma^2)}{p},\; l^\top\Sigma\bar{x} - l^\top\Sigma\mu_\nu\right)^\top$ is asymptotically multivariate normally distributed conditionally on $\nu$, because the vector $a$ is arbitrary.

Taking into account (17), (12) and the fact that $\xi$, $z_0$ and $\bar{x}$ are mutually independent, we get
$$\sqrt{n}\left(\begin{pmatrix}\frac{\xi}{n}\\ \bar{x}^\top\Sigma\bar{x}\\ l^\top\Sigma\bar{x}\\ \frac{z_0}{\sqrt{n}}\end{pmatrix} - \begin{pmatrix}1\\ \mu_\nu^\top\Sigma\mu_\nu + c\,\frac{\operatorname{tr}(\Sigma^2)}{p}\\ l^\top\Sigma\mu_\nu\\ 0\end{pmatrix}\right) \Big|\; \nu \xrightarrow{D} N\!\left(0,\; \begin{pmatrix}2 & 0 & 0 & 0\\ 0 & 2c\,\frac{\operatorname{tr}(\Sigma^4)}{p} + 4\,\mu_\nu^\top\Sigma^3\mu_\nu & 2\, l^\top\Sigma^3\mu_\nu & 0\\ 0 & 2\, l^\top\Sigma^3\mu_\nu & l^\top\Sigma^3 l & 0\\ 0 & 0 & 0 & 1\end{pmatrix}\right).$$
The application of the multivariate delta method leads to
$$\sqrt{n}\,\sigma_\nu^{-1}\left(l^\top S\bar{x} - l^\top\Sigma\mu_\nu\right) \Big|\; \nu \xrightarrow{D} N(0, 1), \qquad (18)$$
where
$$\sigma_\nu^2 = (l^\top\Sigma\mu_\nu)^2 + l^\top\Sigma^3 l + l^\top\Sigma l\left(\mu_\nu^\top\Sigma\mu_\nu + c\,\frac{\operatorname{tr}(\Sigma^2)}{p}\right).$$
The asymptotic distribution does not depend on $\nu$ and, thus, it is also the unconditional asymptotic distribution.

Theorem 1 shows that the properly normalized bilinear form $l^\top S\bar{x}$ can be accurately approximated by a mixture of normal distributions with both mean and variance depending on $\nu$. Moreover, this central limit theorem delivers the following approximation for the distribution of $l^\top S\bar{x}$: for large $n$ and $p$ we have
$$p^{-1} l^\top S\bar{x} \mid \nu \approx \mathcal{CN}\!\left(p^{-1} l^\top\Sigma\mu_\nu,\; \frac{p^{-2}\sigma_\nu^2}{n}\right), \qquad (19)$$


The proof of Theorem 1 shows, in particular, that its key point is a stochastic representation of the product $l^\top S\bar{x}$, which can be presented using a $\chi^2$-distributed random variable, a standard normally distributed random variable, and a random vector which has a location mixture of normal distributions. Both assumptions (A1) and (A2) guarantee that the asymptotic mean $p^{-1} l^\top\Sigma\mu_\nu$ and the asymptotic variance $p^{-2}\sigma_\nu^2$ are bounded with probability one and that the covariance matrix $\Sigma$ remains invertible as the dimension $p$ increases. Note that the case of standard asymptotics can easily be recovered from our result by setting $c \to 0$.

Although Theorem 1 presents a result similar to the classical central limit theorem, it is not a proper CLT because $\nu$ is random; hence, it provides an asymptotic equivalence only. In the current setting it is not possible to provide a proper CLT without additional assumptions imposed on $\nu$. The reason is the finite dimensionality of $\nu$, denoted by $q$, which is fixed and independent of $p$ and $n$. This assures that the randomness in $\nu$ does not vanish asymptotically. So, in order to justify a classical CLT, we need an increasing value of $q = q(n)$ together with the following additional assumptions:

(A3) Let $\mathbb{E}(\nu) = \omega$ and $\operatorname{Cov}(\nu) = \Omega$. For any vector $l$ it holds that
$$\sqrt{q}\;\frac{l^\top\nu - l^\top\omega}{\sqrt{l^\top\Omega l}} \xrightarrow{D} \tilde{z}_0, \qquad (20)$$
where the distribution of $\tilde{z}_0$ may in general depend on $l$.

(A4) There exists $M_2 < \infty$ such that
$$|u_i^\top B\omega| \le M_2 \quad\text{for all } i = 1, \dots, p$$
uniformly in $p$. Moreover, $B\Omega B^\top$ has a uniformly bounded spectral norm.

Assumption (A3) assures that the vector of random coefficients $\nu$ itself satisfies a sort of concentration property, in the sense that any linear combination of its elements asymptotically concentrates around some random variable. In the special case of $\tilde{z}_0$ being standard normally distributed, (A3) implies a usual CLT for a linear combination of the vector $\nu$. As a result, assumption (A3) is a fairly general and natural condition on $\nu$, taking into account that $\nu \sim N_q(\omega, \Omega)$ is assumed in many practical situations (see, e.g., Rao (1965)). In this case Assumption (A4) requires that $\Omega$ is of order $q^{-1}$, i.e., that the high-dimensional distribution of $\nu$ is a well-defined mathematical object. Assumption (A4) is similar to (A1) and (A2), and it ensures that $\mu$ and $B\omega$, as well as $\Sigma$ and $B\Omega B^\top$, have the same behaviour as $p \to \infty$.


Remark 1. Note that assumption (A4) is really needed in order to provide a proper definition of the distribution of $\nu$ in the high-dimensional case. For instance, consider $\nu \sim N\!\left(0, \frac{1}{q}I\right)$. Then the moment-generating function of $\nu$ is given by $m_\nu(t) = \exp\!\left(\frac{1}{2}\frac{t^\top t}{q}\right)$, $t \in \mathbb{R}^q$, which is finite for all vectors $t \in \mathbb{R}^q$ with finite components.

Corollary 1 (CLT). Under the assumptions of Theorem 1, assume (A3) and (A4). Let $l^\top\Sigma l = O(p)$ and $\mu^\top\Sigma\mu = O(p)$. Then it holds that
$$\sqrt{n}\,\sigma^{-1}\left(l^\top S\bar{x} - l^\top\Sigma(\mu + B\omega)\right) \xrightarrow{D} N(0, 1)$$
for $p/n = c + o(n^{-1/2})$ with $c \in [0, +\infty)$ and $q/n \to \gamma$ with $\gamma > 0$ as $n \to \infty$, where
$$\sigma^2 = \left((\mu + B\omega)^\top\Sigma(\mu + B\omega) + c\,\frac{\operatorname{tr}(\Sigma^2)}{p}\right) l^\top\Sigma l + \left(l^\top\Sigma(B\omega + \mu)\right)^2 + l^\top\Sigma^3 l.$$

Proof. Since
$$(u_i^\top\mu_{a,\nu})^2 = \left(u_i^\top\mu + u_i^\top B\omega + u_i^\top B(\nu - \omega) + \frac{a_2}{a_1}u_i^\top l\right)^2 \overset{(A2),(A4)}{\le} M_2^2 + M_2^2 + M_2^2\, q(\nu-\omega)^\top(\nu-\omega) + M_2^2\frac{a_2^2}{a_1^2} + 2M_2^2\left(1 + 2\frac{|a_2|}{|a_1|}\right) + 2M_2^2\left(2 + \frac{|a_2|}{|a_1|}\right)\sqrt{q(\nu-\omega)^\top(\nu-\omega)} < \infty$$
with probability one for $q/n \to \gamma$ as $n \to \infty$, we get from the proof of Theorem 1 and assumption (A3) that
$$\sqrt{n}\,\sigma^{-1}\left(l^\top S\bar{x} - l^\top\Sigma(\mu + B\omega)\right) = \frac{\sigma_\nu}{\sigma}\left(z_0 + \frac{\sqrt{n}}{\sqrt{q}}\,\frac{\sqrt{l^\top\Sigma B\Omega B^\top\Sigma l}}{\sigma_\nu}\,\tilde{z}_0\right) + o_P(1), \qquad (21)$$
where $z_0 \sim N(0, 1)$ and $\tilde{z}_0$ are independently distributed.

First, we note that under (A3) it holds that
$$\frac{\sigma_\nu^2}{\sigma^2} \xrightarrow{a.s.} 1 \qquad (22)$$
for $p/n = c + o(n^{-1/2})$ with $c \in [0, +\infty)$ and $q/n \to \gamma$ with $\gamma > 0$ as $n \to \infty$.


Further, for any vector $m$ that satisfies (A2) we get
$$m^\top\Sigma B\Omega B^\top\Sigma m \le \lambda_{\max}(\Sigma)^2\,\lambda_{\max}(U^\top B\Omega B^\top U)\sum_{i=1}^p |m^\top u_i| = O(p),$$
where $U$ is the eigenvector matrix of $\Sigma$, which is a unitary matrix, and consequently (see, e.g., Chapter 8.5 in Lütkepohl (1996))
$$\lambda_{\max}(U^\top B\Omega B^\top U) = \lambda_{\max}(B\Omega B^\top) = O(1).$$
As a result, we get $l^\top\Sigma B(\nu - \omega) = O_P(1)$ and $(\mu + B\omega)^\top\Sigma B(\nu - \omega) = O_P(1)$.

Finally, using (22) and the fact that $\sigma_\nu^2 = O_P(p^2)$, as discussed after the proof of Theorem 1, we get
$$\frac{\sqrt{l^\top\Sigma B\Omega B^\top\Sigma l}}{\sigma_\nu} \to 0$$
for $p/n = c + o(n^{-1/2})$ with $c \in [0, +\infty)$ and $q/n \to \gamma$ with $\gamma > 0$ as $n \to \infty$; consequently, the second summand in (21) disappears.

Note that the asymptotic regime $q/n \to \gamma > 0$ ensures that the dimension of the random coefficients does not increase much faster than the sample size, i.e., $q$ and $n$ are comparable. It is worth mentioning that, due to Corollary 1, the asymptotic distribution of the bilinear form $l^\top S\bar{x}$ does not depend on the covariance matrix of the random effects $\Omega$. This knowledge may be further useful in constructing a proper pivotal statistic.

3.2 CLT for the product of inverse sample covariance matrix and sample mean vector

In this section we consider the distributional properties of the product of the inverse sample covariance matrix $S^{-1}$ and the sample mean vector $\bar{x}$. Again we prove that properly weighted bilinear forms involving $S^{-1}$ and $\bar{x}$ asymptotically have a normal distribution. This result is summarized in Theorem 2.

Theorem 2. Assume $X \sim LMN_{p,n;q}(\mu, \Sigma, B; f_\nu)$, $p < n - 1$, with $\Sigma$ positive definite, and let $p/n = c + o(n^{-1/2})$, $c \in [0, 1)$ as $n \to \infty$. Let $l$ be a $p$-dimensional vector of constants that satisfies (A2). Then, under (A1) and (A2), it holds that
$$\sqrt{n}\,\tilde{\sigma}_\nu^{-1}\left(l^\top S^{-1}\bar{x} - \frac{1}{1-c}\, l^\top\Sigma^{-1}\mu_\nu\right) \xrightarrow{D} N(0, 1), \qquad (23)$$


where $\mu_\nu = \mu + B\nu$ and
$$\tilde{\sigma}_\nu^2 = \frac{1}{(1-c)^3}\left[\left(l^\top\Sigma^{-1}\mu_\nu\right)^2 + l^\top\Sigma^{-1}l\left(1 + \mu_\nu^\top\Sigma^{-1}\mu_\nu\right)\right].$$

Proof. From Theorem 3.4.1 of Gupta and Nagar (2000) and Proposition 2 we get $S^{-1} \sim IW_p\!\left(n + p, (n-1)\Sigma^{-1}\right)$. It holds that
$$l^\top S^{-1}\bar{x} = (n-1)\,\bar{x}^\top\Sigma^{-1}\bar{x}\;\frac{l^\top S^{-1}\bar{x}}{\bar{x}^\top S^{-1}\bar{x}}\cdot\frac{\bar{x}^\top S^{-1}\bar{x}}{(n-1)\,\bar{x}^\top\Sigma^{-1}\bar{x}}.$$
Since $S^{-1}$ and $\bar{x}$ are independently distributed, we get from Theorem 3.2.12 of Muirhead (1982) that
$$\tilde{\xi} = (n-1)\,\frac{\bar{x}^\top\Sigma^{-1}\bar{x}}{\bar{x}^\top S^{-1}\bar{x}} \sim \chi^2_{n-p} \qquad (24)$$
and it is independent of $\bar{x}$. Moreover, the application of Theorem 3 in Bodnar and Okhrin (2008) proves that $\bar{x}^\top S^{-1}\bar{x}$ is independent of $l^\top S^{-1}\bar{x}/\bar{x}^\top S^{-1}\bar{x}$ for given $\bar{x}$. As a result, it is also independent of $\bar{x}^\top\Sigma^{-1}\bar{x}\cdot l^\top S^{-1}\bar{x}/\bar{x}^\top S^{-1}\bar{x}$ for given $\bar{x}$ and, consequently,
$$(n-1)\,\bar{x}^\top\Sigma^{-1}\bar{x}\;\frac{l^\top S^{-1}\bar{x}}{\bar{x}^\top S^{-1}\bar{x}} \quad\text{and}\quad \frac{\bar{x}^\top S^{-1}\bar{x}}{(n-1)\,\bar{x}^\top\Sigma^{-1}\bar{x}}$$
are independent.

From the proof of Theorem 1 of Bodnar and Schmid (2008) we obtain
$$(n-1)\,\bar{x}^\top\Sigma^{-1}\bar{x}\;\frac{l^\top S^{-1}\bar{x}}{\bar{x}^\top S^{-1}\bar{x}} \,\Big|\; \bar{x} \sim t\!\left(n-p+1;\; (n-1)\, l^\top\Sigma^{-1}\bar{x};\; (n-1)^2\,\frac{\bar{x}^\top\Sigma^{-1}\bar{x}}{n-p+1}\, l^\top R_{\bar{x}}\, l\right), \qquad (25)$$
where $R_a = \Sigma^{-1} - \Sigma^{-1}aa^\top\Sigma^{-1}/a^\top\Sigma^{-1}a$, $a \in \mathbb{R}^p$, and the symbol $t(k, \mu, \tau^2)$ denotes the $t$-distribution with $k$ degrees of freedom, location parameter $\mu$ and scale parameter $\tau^2$. Combining (24) and (25), we get the stochastic representation of $l^\top S^{-1}\bar{x}$ given by
$$l^\top S^{-1}\bar{x} \stackrel{d}{=} \tilde{\xi}^{-1}(n-1)\left(l^\top\Sigma^{-1}\bar{x} + t_0\sqrt{\frac{\bar{x}^\top\Sigma^{-1}\bar{x}}{n-p+1}\; l^\top R_{\bar{x}}\, l}\right) = \tilde{\xi}^{-1}(n-1)\left(l^\top\Sigma^{-1}\bar{x} + \frac{t_0}{\sqrt{n-p+1}}\,\sqrt{l^\top\Sigma^{-1}l}\,\sqrt{\bar{x}^\top R_l\bar{x}}\right),$$


where (see Proposition 2)
$$l^\top\Sigma^{-1}\bar{x} \stackrel{d}{=} l^\top\Sigma^{-1}(\mu + B\nu) + \frac{\sqrt{l^\top\Sigma^{-1}l}}{\sqrt{n}}\, z_0',$$
with $\tilde{\xi} \sim \chi^2_{n-p}$, $z_0' \sim N(0, 1)$, and $t_0 \sim t(n-p+1, 0, 1)$; $\tilde{\xi}$, $\nu$, $z_0'$, and $t_0$ are mutually independent.

Since $R_l\Sigma R_l = R_l$, $\operatorname{tr}(R_l\Sigma) = p - 1$, and $R_l\Sigma\Sigma^{-1}l = 0$, the application of Corollary 5.1.3a and Theorem 5.5.1 in Mathai and Provost (1992) leads to
$$n\,\bar{x}^\top R_l\bar{x} \mid \nu \sim \chi^2_{p-1}(n\delta^2(\nu)) \quad\text{with}\quad \delta^2(\nu) = (\mu + B\nu)^\top R_l(\mu + B\nu), \qquad (26)$$
and, moreover, $l^\top\Sigma^{-1}\bar{x}$ and $\bar{x}^\top R_l\bar{x}$ are independent given $\nu$. Finally, using the stochastic representation of a $t$-distributed random variable, we get
$$t_0 \stackrel{d}{=} \frac{z_0''}{\sqrt{\zeta/(n-p+1)}} \qquad (27)$$
with $\zeta \sim \chi^2_{n-p+1}$, $z_0'' \sim N(0, 1)$ and $z_0' \sim N(0, 1)$ being mutually independent. Together with (26) it yields
$$l^\top S^{-1}\bar{x} \stackrel{d}{=} \tilde{\xi}^{-1}(n-1)\left(l^\top\Sigma^{-1}(\mu + B\nu) + \frac{z_0'}{\sqrt{n}}\sqrt{l^\top\Sigma^{-1}l} + \frac{z_0''}{\sqrt{n}}\sqrt{l^\top\Sigma^{-1}l}\sqrt{\frac{p-1}{n-p+1}\eta}\right) = \tilde{\xi}^{-1}(n-1)\left(l^\top\Sigma^{-1}(\mu + B\nu) + \sqrt{l^\top\Sigma^{-1}l}\sqrt{1 + \frac{p-1}{n-p+1}\eta}\;\frac{z_0}{\sqrt{n}}\right), \qquad (28)$$
where the last equality in (28) follows from the fact that
$$z_0' + z_0''\sqrt{\frac{p-1}{n-p+1}\eta} \stackrel{d}{=} z_0\sqrt{1 + \frac{p-1}{n-p+1}\eta}$$
with
$$\eta = \frac{n\,\bar{x}^\top R_l\bar{x}/(p-1)}{\zeta/(n-p+1)} \,\Big|\; \nu \sim F_{p-1,\,n-p+1}(n\delta^2(\nu))$$
(the non-central $F$-distribution with $p-1$ and $n-p+1$ degrees of freedom and non-centrality parameter $n\delta^2(\nu)$). Moreover, we have that $\tilde{\xi} \sim \chi^2_{n-p}$ and $z_0 \sim N(0, 1)$; $\tilde{\xi}$, $z_0$, and $\eta$ are mutually independent.

From Lemma 6.4(b) in Bodnar et al. (2018a) we get
$$\sqrt{n}\left(\begin{pmatrix}\tilde{\xi}/(n-p)\\ \eta\\ z_0/\sqrt{n}\end{pmatrix} - \begin{pmatrix}1\\ 1 + \delta^2(\nu)/c\\ 0\end{pmatrix}\right) \Big|\; \nu \xrightarrow{D} N\!\left(0,\; \begin{pmatrix}2/(1-c) & 0 & 0\\ 0 & \sigma_\eta^2 & 0\\ 0 & 0 & 1\end{pmatrix}\right)$$
for $p/n = c + o(n^{-1/2})$, $c \in [0, 1)$ as $n \to \infty$ with
$$\sigma_\eta^2 = \frac{2}{c}\left(1 + 2\,\frac{\delta^2(\nu)}{c}\right) + \frac{2}{1-c}\left(1 + \frac{\delta^2(\nu)}{c}\right)^2.$$
Consequently,
$$\sqrt{n}\left(\begin{pmatrix}\tilde{\xi}/(n-1)\\ (p-1)\eta/(n-p+1)\\ z_0/\sqrt{n}\end{pmatrix} - \begin{pmatrix}1-c\\ (c + \delta^2(\nu))/(1-c)\\ 0\end{pmatrix}\right) \Big|\; \nu \xrightarrow{D} N\!\left(0,\; \begin{pmatrix}2(1-c) & 0 & 0\\ 0 & c^2\sigma_\eta^2/(1-c)^2 & 0\\ 0 & 0 & 1\end{pmatrix}\right)$$
for $p/n = c + o(n^{-1/2})$, $c \in [0, 1)$ as $n \to \infty$.

Finally, the application of the delta method (cf. DasGupta (2008, Theorem 3.7)) leads to
$$\sqrt{n}\left(l^\top S^{-1}\bar{x} - \frac{1}{1-c}\, l^\top\Sigma^{-1}(\mu + B\nu)\right) \Big|\; \nu \xrightarrow{D} N(0, \tilde{\sigma}_\nu^2)$$
for $p/n = c + o(n^{-1/2})$, $c \in [0, 1)$ as $n \to \infty$ with
$$\tilde{\sigma}_\nu^2 = \frac{1}{(1-c)^3}\left[2\left(l^\top\Sigma^{-1}(\mu + B\nu)\right)^2 + l^\top\Sigma^{-1}l\left(1 + \delta^2(\nu)\right)\right].$$
Consequently,
$$\sqrt{n}\,\tilde{\sigma}_\nu^{-1}\left(l^\top S^{-1}\bar{x} - \frac{1}{1-c}\, l^\top\Sigma^{-1}(\mu + B\nu)\right) \Big|\; \nu \xrightarrow{D} N(0, 1),$$
where the asymptotic distribution does not depend on $\nu$. Hence, it is also the unconditional asymptotic distribution.

Again, Theorem 2 shows that the distribution of $l^\top S^{-1}\bar{x}$ can be approximated by a mixture of normal distributions. Indeed,
$$p^{-1} l^\top S^{-1}\bar{x} \mid \nu \approx \mathcal{CN}\!\left(\frac{p^{-1}}{1-c}\, l^\top\Sigma^{-1}\mu_\nu,\; \frac{p^{-2}\tilde{\sigma}_\nu^2}{n}\right). \qquad (29)$$

From the proof of Theorem 2 one can read off that the stochastic representation for the product of the inverse sample covariance matrix and the sample mean vector is built from a $\chi^2$-distributed random variable, a general skew normally distributed random vector, and a standard $t$-distributed random variable. This result is itself very useful and allows one to generate values of $l^\top S^{-1}\bar{x}$ by simulating just three random variables from standard univariate distributions together with a random vector $\nu$, which determines the family of the matrix-variate location mixture of normal distributions. The assumptions about the boundedness of the quadratic and bilinear forms involving $\Sigma^{-1}$ play here the same role as in Theorem 1. Note that in this case we need no assumption on either the Frobenius norm of the covariance matrix or that of its inverse. A minimal sketch of this generation scheme is given below.
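The following sketch implements the scheme via the stochastic representation (28); the function name and the way the inputs are supplied are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

def draw_lSinv_xbar(l, mu, B, Sigma, n, nu):
    """One draw of l^T S^{-1} x_bar via the stochastic representation (28)."""
    p = len(l)
    Sinv = np.linalg.inv(Sigma)
    mu_nu = mu + B @ nu
    # R_l = Sigma^{-1} - Sigma^{-1} l l^T Sigma^{-1} / (l^T Sigma^{-1} l)
    Rl = Sinv - np.outer(Sinv @ l, Sinv @ l) / (l @ Sinv @ l)
    delta2 = mu_nu @ Rl @ mu_nu
    # only three univariate draws are needed:
    xi_t = rng.chisquare(n - p)                            # chi^2_{n-p}
    z0 = rng.standard_normal()                             # N(0, 1)
    eta = rng.noncentral_f(p - 1, n - p + 1, n * delta2)   # F_{p-1,n-p+1}(n delta^2)
    return (n - 1) / xi_t * (
        l @ Sinv @ mu_nu
        + np.sqrt(l @ Sinv @ l)
        * np.sqrt(1 + (p - 1) / (n - p + 1) * eta)
        * z0 / np.sqrt(n)
    )
```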

Finally, in Corollary 2 we formulate the CLT for the product of the inverse sample covariance matrix and the sample mean vector.

Corollary 2 (CLT). Under the assumptions of Theorem 2, assume (A3) and (A4). Let $l^\top\Sigma^{-1}l = O(p)$ and $\mu^\top\Sigma^{-1}\mu = O(p)$. Then it holds that
$$\sqrt{n}\,\tilde{\sigma}^{-1}\left(l^\top S^{-1}\bar{x} - \frac{1}{1-c}\, l^\top\Sigma^{-1}(\mu + B\omega)\right) \xrightarrow{D} N(0, 1)$$
for $p/n = c + o(n^{-1/2})$ with $c \in [0, 1)$ and $q/n \to \gamma$ with $\gamma > 0$ as $n \to \infty$, where
$$\tilde{\sigma}^2 = \frac{1}{(1-c)^3}\left[\left(l^\top\Sigma^{-1}(\mu + B\omega)\right)^2 + l^\top\Sigma^{-1}l\left(1 + (\mu + B\omega)^\top\Sigma^{-1}(\mu + B\omega)\right)\right].$$

Proof. From the proof of Theorem 2 and assumption (A3) it holds that
$$\sqrt{n}\,\tilde{\sigma}^{-1}\left(l^\top S^{-1}\bar{x} - \frac{1}{1-c}\, l^\top\Sigma^{-1}(\mu + B\omega)\right) = \frac{\tilde{\sigma}_\nu}{\tilde{\sigma}}\, z_0 + \frac{1}{1-c}\,\frac{\sqrt{n}}{\sqrt{q}}\,\frac{\sqrt{l^\top\Sigma^{-1}B\Omega B^\top\Sigma^{-1}l}}{\tilde{\sigma}}\,\tilde{z}_0 + o_P(1),$$
where $z_0 \sim N(0, 1)$ and $\tilde{z}_0$ are independently distributed. Finally, in a similar way as in Corollary 1, we get from (A4) that
$$(1-c)^{-2}\gamma^{-1}\,\frac{l^\top\Sigma^{-1}B\Omega B^\top\Sigma^{-1}l}{\tilde{\sigma}^2} \to 0, \qquad (30)$$
$$\frac{\tilde{\sigma}_\nu^2}{\tilde{\sigma}^2} \xrightarrow{a.s.} 1 \qquad (31)$$
for $p/n = c + o(n^{-1/2})$ with $c \in [0, 1)$ and $q/n \to \gamma$ with $\gamma > 0$ as $n \to \infty$.

4 Numerical study

In this section we provide a Monte Carlo simulation study to investigate the performance of the suggested CLTs for the products of the (inverse) sample covariance matrix and the sample mean vector.


In our simulations we put $l = 1_p$; each element of the vector $\mu$ is uniformly distributed on $[-1, 1]$, while each element of the matrix $B$ is uniformly distributed on $[0, 1]$. Also, we take $\Sigma$ to be a diagonal matrix where each diagonal element is uniformly distributed on $[0, 1]$. It can be checked that in such a setting the assumptions (A1) and (A2) are satisfied. Indeed, the population covariance matrix satisfies condition (A1) because the probability of getting an exactly zero eigenvalue equals zero. On the other hand, condition (A2) is obviously valid too, because the $i$th eigenvector of $\Sigma$ is $u_i = e_i = (0, \dots, 0, 1, 0, \dots, 0)^\top$ with the one in the $i$th position.

In order to define the distribution of the random vector $\nu$, we consider two special cases. In the first case we take $\nu = |\psi|$, where $\psi \sim N_q(0, I_q)$, i.e., $\nu$ has a $q$-variate truncated normal distribution. In the second case we put $\nu \sim GAL_q(I_q, 1_q, 10)$, i.e., $\nu$ has a $q$-variate generalized asymmetric Laplace distribution (cf. Kozubowski et al. (2013)). Also, we put $q = 10$.

We compare the results for several values of $c \in \{0.1, 0.5, 0.8, 0.95\}$. The simulated data consist of $N = 10^5$ independent realizations, which are used to fit the corresponding kernel density estimators with the Epanechnikov kernel. The bandwidth parameters are determined via cross-validation for every sample. The asymptotic distributions are simulated using the results of Theorems 1 and 2. The corresponding algorithm is given next:

a) generate $\nu = |\psi|$, where $\psi \sim N_q(0_q, I_q)$, or generate $\nu \sim GAL_q(I_q, 1_q, 10)$;

b) generate $l^\top S\bar{x}$ by using the stochastic representation (11) obtained in the proof of Theorem 1, namely
$$l^\top S\bar{x} \stackrel{d}{=} \frac{\xi}{n-1}\, l^\top\Sigma(y + B\nu) + \frac{\sqrt{\xi}}{n-1}\left((y + B\nu)^\top\Sigma(y + B\nu)\; l^\top\Sigma l - \left(l^\top\Sigma(y + B\nu)\right)^2\right)^{1/2} z_0,$$
where $\xi \sim \chi^2_{n-1}$, $z_0 \sim N(0, 1)$, $y \sim N_p\!\left(\mu, \frac{1}{n}\Sigma\right)$; $\xi$, $z_0$, $y$, and $\nu$ are mutually independent;

b') generate $l^\top S^{-1}\bar{x}$ by using the stochastic representation (28) obtained in the proof of Theorem 2, namely
$$l^\top S^{-1}\bar{x} \stackrel{d}{=} \tilde{\xi}^{-1}(n-1)\left(l^\top\Sigma^{-1}(\mu + B\nu) + \sqrt{l^\top\Sigma^{-1}l}\sqrt{1 + \frac{p-1}{n-p+1}\eta}\;\frac{z_0}{\sqrt{n}}\right),$$
where $\tilde{\xi} \sim \chi^2_{n-p}$, $z_0 \sim N(0, 1)$, and $\eta \sim F_{p-1,\,n-p+1}(n\delta^2(\nu))$ with $\delta^2(\nu) = (\mu + B\nu)^\top R_l(\mu + B\nu)$, $R_l = \Sigma^{-1} - \Sigma^{-1}ll^\top\Sigma^{-1}/l^\top\Sigma^{-1}l$; $\tilde{\xi}$, $z_0$ and $(\eta, \nu)$ are mutually independent;

c) compute
$$\sqrt{n}\,\sigma_\nu^{-1}\left(l^\top S\bar{x} - l^\top\Sigma\mu_\nu\right) \quad\text{and}\quad \sqrt{n}\,\tilde{\sigma}_\nu^{-1}\left(l^\top S^{-1}\bar{x} - \frac{1}{1-c}\, l^\top\Sigma^{-1}\mu_\nu\right),$$
where $\mu_\nu = \mu + B\nu$,
$$\sigma_\nu^2 = \left[\mu_\nu^\top\Sigma\mu_\nu + c\,\frac{\|\Sigma\|_F^2}{p}\right] l^\top\Sigma l + (l^\top\Sigma\mu_\nu)^2 + l^\top\Sigma^3 l,$$
$$\tilde{\sigma}_\nu^2 = \frac{1}{(1-c)^3}\left[2\left(l^\top\Sigma^{-1}\mu_\nu\right)^2 + l^\top\Sigma^{-1}l\left(1 + \delta^2(\nu)\right)\right]$$
with $\delta^2(\nu) = \mu_\nu^\top R_l\mu_\nu$, $R_l = \Sigma^{-1} - \Sigma^{-1}ll^\top\Sigma^{-1}/l^\top\Sigma^{-1}l$;

d) repeat a)-c) N times.

It is remarkable that for generating $l^\top S\bar{x}$ and $l^\top S^{-1}\bar{x}$ only random variables from standard distributions are needed; neither the data matrix $X$ nor the sample covariance matrix $S$ is used. A sketch of the procedure follows.
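As one possible implementation, the sketch below reproduces steps a)–d) for the statistic of Theorem 1 with the truncated normal choice of $\nu$ (step b' is implemented analogously, e.g., via the sketch given after Theorem 2); the function name and the dimension values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_theorem1_statistic(p, n, q, N):
    """Monte Carlo sketch of steps a)-d) for the Theorem 1 statistic."""
    c = p / n
    l = np.ones(p)                              # l = 1_p
    mu = rng.uniform(-1, 1, size=p)
    B = rng.uniform(0, 1, size=(p, q))
    Sigma = np.diag(rng.uniform(0, 1, size=p))  # diagonal Sigma as in the setup

    out = np.empty(N)
    for k in range(N):
        # a) nu = |psi|, psi ~ N_q(0, I_q)
        nu = np.abs(rng.standard_normal(q))
        mu_nu = mu + B @ nu
        # b) stochastic representation (11) with y ~ N_p(mu, Sigma/n)
        y = rng.multivariate_normal(mu, Sigma / n)
        xi = rng.chisquare(n - 1)
        z0 = rng.standard_normal()
        yb = y + B @ nu
        lSx = (xi / (n - 1)) * (l @ Sigma @ yb) + (np.sqrt(xi) / (n - 1)) * np.sqrt(
            (yb @ Sigma @ yb) * (l @ Sigma @ l) - (l @ Sigma @ yb) ** 2) * z0
        # c) standardize with sigma_nu^2 from Theorem 1
        sigma2 = ((mu_nu @ Sigma @ mu_nu + c * np.trace(Sigma @ Sigma) / p)
                  * (l @ Sigma @ l)
                  + (l @ Sigma @ mu_nu) ** 2
                  + l @ Sigma @ Sigma @ Sigma @ l)
        out[k] = np.sqrt(n) * (lSx - l @ Sigma @ mu_nu) / np.sqrt(sigma2)
    return out  # d) N standardized draws, approximately N(0, 1)

vals = simulate_theorem1_statistic(p=50, n=500, q=10, N=10**4)
```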

[Figures 1–8]

In Figures 1–4 we present the simulation results for the asymptotic distribution given in Theorem 1, while the asymptotic distribution given in Theorem 2 is presented in Figures 5–8, for different values of $c \in \{0.1, 0.5, 0.8, 0.95\}$. The suggested asymptotic distributions are shown as a dashed black line, while the standard normal distribution is a solid black line. All results demonstrate a good performance of both asymptotic distributions for all considered values of $c$. Even in the extreme case $c = 0.95$ our asymptotic results seem to produce a quite reasonable approximation. Moreover, we observe a good robustness of our theoretical results for different distributions of $\nu$. Also, we observe that all asymptotic distributions are slightly skewed to the right for finite dimensions. This effect is even more pronounced in the case of the generalized asymmetric Laplace distribution. Nevertheless, the skewness disappears with growing dimension and sample size, i.e., the distribution becomes symmetric and converges to its asymptotic counterpart.

5 Summary

In this paper we introduce the family of matrix-variate location mixture of normal distributions, which generalizes a large number of existing skew normal models. Under the MVLMN we derive the distributions of the sample mean vector and the sample covariance matrix. Moreover, we show that they are independently distributed. Furthermore, we derive CLTs under the high-dimensional asymptotic regime for the products of the (inverse) sample covariance matrix and the sample mean vector. In the numerical study, the good finite-sample performance of both asymptotic distributions is documented.

Acknowledgement

The authors are thankful to Professor Niels Richard Hansen, the Associate Editor, and two anonymous Reviewers for careful reading of the manuscript and for their suggestions which have improved an earlier version of this paper. The second author appreciates the financial support of the Swedish Research Council Grant Dnr: 2013-5180 and Riksbankens Jubileumsfond Grant Dnr: P13-1024:1.

Supporting information. Additional information for this article is available online including Appendix and figures.

References

Adcock, C., Eling, M., and Loperfido, N. (2015). Skewed distributions in finance and actuarial science: a review. The European Journal of Finance, 21:1253–1281.

Amemiya, Y. (1994). On multivariate mixed model analysis. Lecture Notes-Monograph Series, 24:83–95.

Azzalini, A. (2005). The skew-normal distribution and related multivariate families. Scandinavian Journal of Statistics, 32:159–188.

Azzalini, A. and Capitanio, A. (1999). Statistical applications of the multivariate skew-normal distribution. Journal of the Royal Statistical Society: Series B, 61:579–602.

Azzalini, A. and Dalla-Valle, A. (1996). The multivariate skew-normal distribution. Biometrika, 83:715–726.

Bai, J. and Shi, S. (2011). Estimating high dimensional covariance matrices and its applications. Annals of Economics and Finance, 12:199–215.

Bai, Z. D. and Silverstein, J. W. (2004). CLT for linear spectral statistics of large dimensional sample covariance matrices. Annals of Probability, 32:553–605.


Bartoletti, S. and Loperfido, N. (2010). Modelling air pollution data by the skew-normal distribution. Stochastic Environmental Research and Risk Assessment, 24:513–517.

Bodnar, T. and Gupta, A. K. (2011). Estimation of the precision matrix of multivariate elliptically contoured stable distribution. Statistics, 45:131–142.

Bodnar, T., Gupta, A. K., and Parolya, N. (2014a). On the strong convergence of the optimal linear shrinkage estimator for large dimensional covariance matrix. Journal of Multivariate Analysis, 132:215–228.

Bodnar, T., Gupta, A. K., and Parolya, N. (2016). Direct shrinkage estimation of large dimensional precision matrix. Journal of Multivariate Analysis, 146:223–236.

Bodnar, T., Hautsch, N., and Parolya, N. (2018a). Consistent estimation of the high dimensional efficient frontier. Technical report.

Bodnar, T., Mazur, S., and Okhrin, Y. (2013). On the exact and approximate distributions of the product of a Wishart matrix with a normal vector. Journal of Multivariate Analysis, 125:176–189.

Bodnar, T., Mazur, S., and Okhrin, Y. (2014b). Distribution of the product of singular Wishart matrix and normal vector. Theory of Probability and Mathematical Statistics, 91:1–14.

Bodnar, T., Okhrin, O., and Parolya, N. (2018b). Optimal shrinkage estimator for high-dimensional mean vector. Journal of Multivariate Analysis, under revision.

Bodnar, T. and Okhrin, Y. (2008). Properties of the singular, inverse and generalized inverse partitioned Wishart distributions. Journal of Multivariate Analysis, 99:2389–2405.

Bodnar, T. and Okhrin, Y. (2011). On the product of inverse Wishart and normal distributions with applications to discriminant analysis and portfolio theory. Scandinavian Journal of Statistics, 38:311–331.

Bodnar, T. and Schmid, W. (2008). A test for the weights of the global minimum variance portfolio in an elliptical model. Metrika, 67(2):127–143.

Bühlmann, P. and van de Geer, S. (2011). Statistics for High-Dimensional Data: Methods, Theory and Applications. Springer Publishing Company, Incorporated, 1st edition.

Cai, T. T. and Yuan, M. (2012). Adaptive covariance matrix estimation through block thresholding. Ann. Statist., 40:2014–2042.


Cai, T. T. and Zhou, H. H. (2012). Optimal rates of convergence for sparse covariance matrix estimation. Ann. Statist., 40:2389–2420.

Christiansen, M. and Loperfido, N. (2014). Improved approximation of the sum of random vectors by the skew normal distribution. Journal of Applied Probability, 51(2):466–482.

DasGupta, A. (2008). Asymptotic Theory of Statistics and Probability. Springer Texts in Statistics. Springer.

De Luca, G. and Loperfido, N. (2015). Modelling multivariate skewness in financial returns: a SGARCH approach. The European Journal of Finance, 21:1113–1131.

Efron, B. (2006). Minimum volume confidence regions for a multivariate normal mean vector. Journal of the Royal Statistical Society: Series B, 68:655–670.

Fan, J., Fan, Y., and Lv, J. (2008). High dimensional covariance matrix estimation using a factor model. Journal of Econometrics, 147:186–197.

Fan, J., Liao, Y., and Mincheva, M. (2013). Large covariance estimation by thresholding principal orthogonal complements. Journal of the Royal Statistical Society: Series B, 75:603–680.

Gupta, A. and Nagar, D. (2000). Matrix Variate Distributions. Chapman & Hall/CRC.

Jorion, P. (1986). Bayes–Stein estimation for portfolio analysis. Journal of Financial and Quantitative Analysis, 21:279–292.

Kotsiuba, I. and Mazur, S. (2015). On the asymptotic and approximate distributions of the product of an inverse Wishart matrix and a Gaussian random vector. Theory of Probability and Mathematical Statistics, 93:96–105.

Kozubowski, T. J., Podgórski, K., and Rychlik, I. (2013). Multivariate generalized Laplace distribution and related random fields. Journal of Multivariate Analysis, 113:59–72.

Liseo, B. and Loperfido, N. (2003). A Bayesian interpretation of the multivariate skew-normal distribution. Statistics & Probability Letters, 61:395–401.

Liseo, B. and Loperfido, N. (2006). A note on reference priors for the scalar skew-normal distribution. Journal of Statistical Planning and Inference, 136:373–389.

Loperfido, N. (2010). Canonical transformations of skew-normal variates. TEST, 19:146–165.

Lütkepohl, H. (1996). Handbook of Matrices. New York: John Wiley & Sons.


Marčenko, V. A. and Pastur, L. A. (1967). Distribution of eigenvalues for some sets of random matrices. Sbornik: Mathematics, 1:457–483.

Mathai, A. and Provost, S. B. (1992). Quadratic Forms in Random Variables. Marcel Dekker.

Muirhead, R. J. (1982). Aspects of Multivariate Statistical Theory. Wiley, New York.

Potthoff, R. F. and Roy, S. N. (1964). A generalized multivariate analysis of variance model useful especially for growth curve problems. Biometrika, 51(3/4):313–326.

Provost, S. and Rudiuk, E. (1996). The exact distribution of indefinite quadratic forms in noncentral normal vectors. Annals of the Institute of Statistical Mathematics, 48:381–394.

Rao, C. R. (1965). The theory of least squares when the parameters are stochastic and its application to the analysis of growth curves. Biometrika, 52(3/4):447–458.

Silverstein, J. W. (1995). Strong convergence of the empirical distribution of eigenvalues of large-dimensional random matrices. Journal of Multivariate Analysis, 55:331–339.

Stein, C. (1956). Inadmissibility of the usual estimator of the mean of a multivariate normal distribution. In Neyman, J., editor, Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability. University of California, Berkeley.

Wang, C., Pan, G., Tong, T., and Zhu, L. (2015). Shrinkage estimation of large dimensional precision matrix using random matrix theory. Statistica Sinica, 25:993–1008.

Taras Bodnar, Department of Mathematics, Stockholm University, SE-10691 Stockholm, Sweden. E-mail address: taras.bodnar@math.su.se

Stepan Mazur, Department of Statistics, School of Business, Örebro University, SE-70182 Örebro, Sweden. E-mail address: stepan.mazur@oru.se

Corresponding author

Nestor Parolya, Institute of Statistics, Faculty of Economics and Management, Leibniz University Hannover, D-30167 Hannover, Germany. E-mail address: parolya@statistik.uni-hannover.de


6 Appendix

Proof of Proposition 1.

Proof. Straightforward but tedious calculations give
$$f_X(Z) = C^{-1}\int_{\mathbb{R}^q_+} f_{N_{p,n}(\mu 1_n^\top, \Sigma\otimes I_n)}(Z - B\nu^* 1_n^\top)\, f_{N_q(0,\Omega)}(\nu^*)\, d\nu^*$$
$$= C^{-1}\,\frac{(2\pi)^{-(np+q)/2}}{|\Omega|^{1/2}|\Sigma|^{n/2}}\int_{\mathbb{R}^q_+}\exp\!\left(-\frac{1}{2}\nu^{*\top}\Omega^{-1}\nu^*\right)\exp\!\left(-\frac{1}{2}\operatorname{vec}\!\left(Z - \mu 1_n^\top - B\nu^* 1_n^\top\right)^\top (I_n\otimes\Sigma)^{-1}\operatorname{vec}\!\left(Z - \mu 1_n^\top - B\nu^* 1_n^\top\right)\right) d\nu^*$$
$$= C^{-1}\,\frac{(2\pi)^{-(np+q)/2}}{|\Omega|^{1/2}|\Sigma|^{n/2}}\exp\!\left(-\frac{1}{2}\operatorname{vec}(Z - \mu 1_n^\top)^\top (I_n\otimes\Sigma)^{-1}\operatorname{vec}(Z - \mu 1_n^\top)\right)\exp\!\left(\frac{1}{2}\operatorname{vec}(Z - \mu 1_n^\top)^\top E^\top D E\,\operatorname{vec}(Z - \mu 1_n^\top)\right)$$
$$\quad\times \int_{\mathbb{R}^q_+}\exp\!\left(-\frac{1}{2}\left[\nu^* - DE\operatorname{vec}(Z - \mu 1_n^\top)\right]^\top D^{-1}\left[\nu^* - DE\operatorname{vec}(Z - \mu 1_n^\top)\right]\right) d\nu^*$$
$$= C^{-1}\,\frac{|F|^{1/2}|D|^{1/2}}{|\Omega|^{1/2}|\Sigma|^{n/2}}\,\Phi_q\!\left(0;\, -DE\operatorname{vec}(Z - \mu 1_n^\top),\, D\right)\phi_{pn}\!\left(\operatorname{vec}(Z - \mu 1_n^\top);\, 0,\, F\right)$$
$$= \widetilde{C}^{-1}\,\Phi_q\!\left(0;\, -DE\operatorname{vec}(Z - \mu 1_n^\top),\, D\right)\phi_{pn}\!\left(\operatorname{vec}(Z - \mu 1_n^\top);\, 0,\, F\right),$$
where $D = (nB^\top\Sigma^{-1}B + \Omega^{-1})^{-1}$, $E = 1_n^\top\otimes B^\top\Sigma^{-1}$, $F = (I_n\otimes\Sigma^{-1} - E^\top DE)^{-1}$, and
$$\widetilde{C}^{-1} = C^{-1}\,\frac{|F|^{1/2}|D|^{1/2}}{|\Omega|^{1/2}|\Sigma|^{n/2}}.$$


[Figure 1: Kernel density estimator of the asymptotic distribution as given in Theorem 1 for c = 0.1, plotted against the standard normal density. Panels: (a) p = 50, n = 500, ν ∼ TN_q(0, I_q); (b) p = 100, n = 1000, ν ∼ TN_q(0, I_q); (c) p = 50, n = 500, ν ∼ GAL_q(1_q, I_q, 10); (d) p = 100, n = 1000, ν ∼ GAL_q(1_q, I_q, 10).]


[Figure 2: Kernel density estimator of the asymptotic distribution as given in Theorem 1 for c = 0.5, plotted against the standard normal density. Panels: (a) p = 250, n = 500, ν ∼ TN_q(0, I_q); (b) p = 500, n = 1000, ν ∼ TN_q(0, I_q); (c) p = 250, n = 500, ν ∼ GAL_q(1_q, I_q, 10); (d) p = 500, n = 1000, ν ∼ GAL_q(1_q, I_q, 10).]


[Figure 3: Kernel density estimator of the asymptotic distribution as given in Theorem 1 for c = 0.8, plotted against the standard normal density. Panels: (a) p = 400, n = 500, ν ∼ TN_q(0, I_q); (b) p = 800, n = 1000, ν ∼ TN_q(0, I_q); (c) p = 400, n = 500, ν ∼ GAL_q(1_q, I_q, 10); (d) p = 800, n = 1000, ν ∼ GAL_q(1_q, I_q, 10).]


[Figure 4: Kernel density estimator of the asymptotic distribution as given in Theorem 1 for c = 0.95, plotted against the standard normal density. Panels: (a) p = 475, n = 500, ν ∼ TN_q(0, I_q); (b) p = 950, n = 1000, ν ∼ TN_q(0, I_q); (c) p = 475, n = 500, ν ∼ GAL_q(1_q, I_q, 10); (d) p = 950, n = 1000, ν ∼ GAL_q(1_q, I_q, 10).]


[Figure 5: Kernel density estimator of the asymptotic distribution as given in Theorem 2 for c = 0.1, plotted against the standard normal density. Panels: (a) p = 50, n = 500, ν ∼ TN_q(0, I_q); (b) p = 100, n = 1000, ν ∼ TN_q(0, I_q); (c) p = 50, n = 500, ν ∼ GAL_q(1_q, I_q, 10); (d) p = 100, n = 1000, ν ∼ GAL_q(1_q, I_q, 10).]


[Figure 6: Kernel density estimator of the asymptotic distribution as given in Theorem 2 for c = 0.5, plotted against the standard normal density. Panels: (a) p = 250, n = 500, ν ∼ TN_q(0, I_q); (b) p = 500, n = 1000, ν ∼ TN_q(0, I_q); (c) p = 250, n = 500, ν ∼ GAL_q(1_q, I_q, 10); (d) p = 500, n = 1000, ν ∼ GAL_q(1_q, I_q, 10).]


[Figure 7: Kernel density estimator of the asymptotic distribution as given in Theorem 2 for c = 0.8, plotted against the standard normal density. Panels: (a) p = 400, n = 500, ν ∼ TN_q(0, I_q); (b) p = 800, n = 1000, ν ∼ TN_q(0, I_q); (c) p = 400, n = 500, ν ∼ GAL_q(1_q, I_q, 10); (d) p = 800, n = 1000, ν ∼ GAL_q(1_q, I_q, 10).]


[Figure 8: Kernel density estimator of the asymptotic distribution as given in Theorem 2 for c = 0.95, plotted against the standard normal density. Panels: (a) p = 475, n = 500, ν ∼ TN_q(0, I_q); (b) p = 950, n = 1000, ν ∼ TN_q(0, I_q); (c) p = 475, n = 500, ν ∼ GAL_q(1_q, I_q, 10); (d) p = 950, n = 1000, ν ∼ GAL_q(1_q, I_q, 10).]
