
DISSERTATIONES MATHEMATICAE
(ROZPRAWY MATEMATYCZNE)

EDITORIAL COMMITTEE: ANDRZEJ BIAŁYNICKI-BIRULA, BOGDAN BOJARSKI, ZBIGNIEW CIESIELSKI, JERZY ŁOŚ, ZBIGNIEW SEMADENI, JERZY ZABCZYK (editor), WIESŁAW ŻELAZKO (deputy editor)

CCCLVIII

JOLANTA K. MISIEWICZ

Substable and pseudo-isotropic processes. Connections with the geometry of subspaces of Lα-spaces


Institute of Mathematics, Technical University of Wrocław, Wybrzeże Wyspiańskiego 27, 50-370 Wrocław, Poland

E-mail: jolanta@graf.im.pwr.wroc.pl

Published by the Institute of Mathematics, Polish Academy of Sciences. Typeset in TeX at the Institute.


PRINTED IN POLAND

© Copyright by Instytut Matematyczny PAN, Warszawa 1996


I. Introduction
II. Pseudo-isotropic random vectors
  II.1. Symmetric stable vectors
  II.2. Pseudo-isotropic random vectors
  II.3. Elliptically contoured vectors
  II.4. α-symmetric random vectors
  II.5. Substable random vectors
III. Exchangeability and pseudo-isotropy
  III.1. Pseudo-isotropic exchangeable sequences
  III.2. Schoenberg-type theorems
  III.3. Some generalizations
IV. Stable and substable stochastic processes
  IV.1. Gaussian processes and Reproducing Kernel Hilbert Spaces
  IV.2. Elliptically contoured processes
  IV.3. Symmetric stable stochastic processes
  IV.4. Spectral representation of symmetric stable processes
  IV.5. Substable and pseudo-isotropic stochastic processes
  IV.6. Lα-dependent stochastic integrals
  IV.7. Random limit theorems
V. Infinite divisibility of substable stochastic processes
  V.1. Infinitely divisible distributions. Lévy measures
  V.2. Approximative logarithm
  V.3. Infinite divisibility of substable random vectors
  V.4. Infinite divisibility of substable processes
References
Index

1991 Mathematics Subject Classification: Primary 46A15, 60B11; Secondary 60G07, 46B20, 60E07, 60K99.


This paper is devoted to a problem which can be expressed in a very simple way in elementary probability theory. Consider a symmetric random vector X = (X₁, X₂) taking values in R². For every line ℓ in R² passing through the origin we define a random variable Π_ℓ(X), the orthogonal projection of X onto ℓ. In other words, Π_ℓ(X) = ⟨e_ℓ, X⟩, where e_ℓ is a unit vector contained in the line ℓ, and ⟨·, ·⟩ denotes the usual inner product in R². The problem is to characterize all random vectors X with the property that all orthogonal projections Π_ℓ(X) are equal in distribution up to a scale parameter. Equivalently, with every line ℓ there is associated a positive constant c(ℓ) such that Π_ℓ(X) has the same distribution as c(ℓ)·X₁. Random vectors having this property are called pseudo-isotropic.

The existence of such random vectors is evident: it is enough to notice that multidimensional symmetric Gaussian random vectors and multidimensional symmetric stable vectors are pseudo-isotropic. Another example is given by a random vector uniformly distributed on the unit sphere in R². In this case the distribution of the orthogonal projection does not depend on the direction of the diameter onto which we are projecting.
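This last example can be illustrated numerically. The sketch below (an illustration only; the grid size and the two directions are arbitrary choices) replaces the uniform distribution on the unit circle by a uniform grid of points and checks that the sorted projection values onto two different directions coincide, i.e. the projected "distribution" does not depend on the direction.

```python
import math

# Uniform grid on the unit circle, standing in for the uniform distribution:
# N points theta_k = 2*pi*k/N.
N = 360
points = [(math.cos(2 * math.pi * k / N), math.sin(2 * math.pi * k / N))
          for k in range(N)]

def projections(phi):
    """Sorted projections of the grid onto the direction e_phi = (cos phi, sin phi)."""
    e = (math.cos(phi), math.sin(phi))
    return sorted(x * e[0] + y * e[1] for (x, y) in points)

# Two directions respecting the grid spacing: the sorted projection values
# agree up to floating-point rounding.
p0 = projections(0.0)
p1 = projections(2 * math.pi * 30 / N)   # a direction 30 grid steps away
assert max(abs(a - b) for a, b in zip(p0, p1)) < 1e-9
```

The directions are taken at multiples of the grid spacing so that the rotated grid is again the same grid; for a generic direction the agreement would hold only approximately, improving as N grows.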

In an analytical description of the problem we want to characterize all symmetric random vectors X = (X₁, X₂) such that their characteristic function φ_X(ξ₁t, ξ₂t) is the same as the characteristic function φ_{X₁}(c(ξ₁, ξ₂)t) of the random variable c(ξ₁, ξ₂)·X₁. The function c : R² → [0, ∞) has to be homogeneous of degree one. The equivalence easily follows from the equality

  ξ₁X₁ + ξ₂X₂ = √(ξ₁² + ξ₂²) · Π_ℓ(X),

where ℓ is the line passing through the origin containing the unit vector

  e_ℓ = ( ξ₁/√(ξ₁² + ξ₂²), ξ₂/√(ξ₁² + ξ₂²) ),

and c(ξ₁, ξ₂) = √(ξ₁² + ξ₂²) · c(ℓ).
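The displayed equality is nothing more than the decomposition of a dot product along a unit vector, which a two-line numerical check makes concrete (the particular values of ξ and X below are arbitrary):

```python
import math

# Check the identity  xi1*X1 + xi2*X2 = sqrt(xi1^2 + xi2^2) * <e_l, X>
# for one (arbitrary) choice of xi and a sample point X.
xi1, xi2 = 0.3, -1.7
X1, X2 = 2.4, 0.9
norm = math.hypot(xi1, xi2)
e = (xi1 / norm, xi2 / norm)        # the unit vector e_l spanning the line l
proj = e[0] * X1 + e[1] * X2        # Pi_l(X) = <e_l, X>
assert abs((xi1 * X1 + xi2 * X2) - norm * proj) < 1e-12
```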

The property considered can easily be generalized to random vectors taking values in spaces bigger than R². In the paper we consider pseudo-isotropic random vectors taking values in Rⁿ, pseudo-isotropic sequences of random variables and pseudo-isotropic stochastic processes. Pseudo-isotropic random vectors taking values in infinite-dimensional linear spaces are not discussed in the paper, but the interested reader can find a lot of useful information in the references.

By studying pseudo-isotropic random vectors and stochastic processes we do not only want to enrich the class of distributions with properties which are interesting and convenient for calculations. We also want to propose another method of studying symmetric stable random vectors and processes, which are strictly connected with the idea of pseudo-isotropy. Firstly, the scale parameter c(ξ₁, ξ₂) can be given by an Lα-norm for some α ∈ (0, 2]; i.e. there may exist a linear operator ℜ : R² → Lα(S, Σ, ν) for some measure space (S, Σ, ν) such that

(∗)  c(ξ₁, ξ₂) = ‖ℜ(ξ₁, ξ₂)‖_α.

This means that the characteristic function of a pseudo-isotropic random vector X can be of the form φ(‖ℜ(ξ₁, ξ₂)‖_α), while the characteristic function of a symmetric α-stable random vector is of the form exp{−‖ℜ(ξ₁, ξ₂)‖_α^α}. Secondly, representation (∗) holds under a very weak condition on the pseudo-isotropic random vector X: it is enough to assume that there exists ε > 0 such that E|X₁|^ε < ∞. Thirdly, every known function c : R² → [0, ∞) which can appear in the definition of a pseudo-isotropic random vector admits a representation (∗) for some α ∈ (0, 2].

Historically, the year 1938, in which Schoenberg published three papers on spherically symmetric random vectors and completely monotonic functions (see [215]–[217]), is considered the beginning of the investigation of pseudo-isotropic random vectors. We should remember, however, that even earlier, in the 1920s and 1930s, Paul Lévy and Aleksandr Yakovlevich Khinchin had introduced the idea of stable random variables and vectors, and also that the beginnings of the investigation of Gaussian random vectors go back to the beginning of the 18th century.

Note that spherically symmetric random vectors have a very special property: both the characteristic function and the multidimensional density function (if the latter exists) are constant on spheres centered at the origin. This was the reason why the theory developed by Schoenberg broke into two parts. The first one, which we call pseudo-isotropy, describes random vectors and processes having all one-dimensional projections the same up to a scale parameter, which is equivalent to fixing the geometry of the level curves of the characteristic function. The second one, only occasionally appearing in this paper, describes random vectors and processes with a fixed geometry of the level curves of the multidimensional density function, and is usually connected with the de Finetti theorem, where the exchangeability σ-field is specified by the geometry.

Spherically symmetric random vectors and processes were extensively studied and turned out to be very useful in statistics and probability theory. Slightly generalizing the definition to vectors which are linear images of spherically symmetric random vectors, mathematicians considered elliptically contoured random vectors and spherically generated random vectors. We shall also mention here that the well-known sub-Gaussian random vectors and stochastic processes, in both possible definitions (i.e. random vectors which are mixtures of a symmetric Gaussian random vector, and random vectors having all weak moments proportional to the corresponding moments of some Gaussian random vector), are in fact elliptically contoured. More about the history of elliptically contoured random vectors and processes can be found in [44] and [172].

In 1967 Bretagnolle, Dacunha-Castelle and Krivine (see [29]) proved that for α ∈ (0, 2] all positive definite norm-dependent functions on infinite-dimensional Lα-spaces are scale mixtures of the function exp{−‖ξ‖_α^α}, while for α > 2 the only positive definite norm-dependent function on an infinite-dimensional Lα-space is constant, so the corresponding pseudo-isotropic random vector is zero with probability one. This paper was important for the theory because for the first time symmetric α-stable random vectors appeared not only as an example of pseudo-isotropic random vectors; it was also shown that in some cases pseudo-isotropic random vectors have to be mixtures of symmetric stable vectors.

In 1976 Christensen and Ressel (see [47]) showed that a positive definite norm-dependent function on an infinite-dimensional Banach space has to be a mixture of the function exp{−‖ξ‖²}. This result, giving only a necessary condition, is weaker than the previous one, because only on a Hilbert space is the function exp{−‖ξ‖²} positive definite (and defines the characteristic function of some Gaussian random vector). In 1988 [162] (see also [172], 1990) Misiewicz generalized this result and proved that a positive definite quasi-norm-dependent function on a linear space containing ℓⁿ_α's uniformly must be a mixture of the function exp{−‖ξ‖^α}. All these results connect the theory of pseudo-isotropic random vectors and processes with the well-developed geometry of vector spaces, in particular with the idea of spaces having stable type or cotype. In this paper we pay a lot of attention to the connections between the geometrical properties of the Reproducing Kernel Space (defined for pseudo-isotropic random vectors in analogy to the Reproducing Kernel Hilbert Space, a well-known object in the theory of Gaussian random vectors) and some properties, especially infinite divisibility, of pseudo-isotropic random vectors.

Once one has noticed that the uniform distribution on the unit sphere in Rⁿ is spherically symmetric, it is not difficult to see that all spherically symmetric distributions on Rⁿ are mixtures of this special one. For a long time this was the only known theorem characterizing pseudo-isotropic random vectors. When giving this characterization Schoenberg also gave a list of questions on the existence and characterization of pseudo-isotropic random vectors with different types of quasi-norms. It turned out that even for partial answers we had to wait more than 50 years.

Let me recall here the main of Schoenberg's questions (see [217]):

1. Let p > 2, n > 2; does there exist a positive number α such that exp{−‖ξ‖_p^α} is positive definite on Rⁿ?

2. Does there exist a positive number α such that exp{−‖ξ‖_∞^α} is positive definite on Rⁿ?

The questions can be reformulated in the following way: does there exist α ≤ 2 such that ℓⁿ_p, p > 2 (respectively ℓⁿ_∞), embeds isometrically into some Lα-space? The equivalence of these two formulations follows immediately from the Lévy spectral theorem for characteristic functions of symmetric stable random vectors taking values in Rⁿ (see [138], §63). The restriction α ≤ 2 was already known to Schoenberg (see [217], §5). There were many attempts to solve these problems. In 1976 Dor [52] showed that for α > 1, ℓ²_p embeds isometrically into an Lα-space if and only if α ≤ p ≤ 2 or p = 2. The first Schoenberg problem was finally solved in 1991 by Koldobsky [121], who showed that ℓⁿ_p does not embed into any Lα-space if n ≥ 3, p > 2, α ≤ 2.

In 1989 Misiewicz [163] showed that if a random vector taking values in R³ is pseudo-isotropic with respect to the quasi-norm c(ξ) = ‖ξ‖_∞, then it is zero with probability one (this also answers the second of Schoenberg's questions in the negative). In 1991 Zastawny [245], and independently Lisitsky [143], showed that if a pseudo-isotropic random vector depends on the ℓα-norm in R³ for some α > 2, then it is zero with probability one. These results are quite important as the first finite-dimensional examples of spaces on which every norm-dependent positive definite function must be constant. The examples known earlier were spaces connected with the result of Bretagnolle, Dacunha-Castelle and Krivine [29], i.e. spaces containing ℓⁿ_α's uniformly for some α > 2.

Another problem lies in finding a full characterization of the different types of pseudo-isotropic random vectors on Rⁿ. In fact, except for spherically symmetric (and consequently elliptically contoured) random vectors, the full characterization is known only in the case of ℓ¹-symmetric random vectors, i.e. pseudo-isotropic random vectors with the quasi-norm c(ξ) = Σ|ξ_k|. This result, given by Cambanis, Keener and Simons in 1983 (see [38]), is based on a very interesting integral formula

  ∫₀^{π/2} f( x²/sin t + y²/cos t ) dt = ∫₀^{π/2} f( (|x| + |y|)²/sin t ) dt,

which holds for every measurable function f for which one of these integrals exists. They showed that every ℓ¹-symmetric random vector in Rⁿ must be of the form

  ( U₁/√D₁, …, U_n/√D_n ) · Θ,

where U = (U₁, …, U_n) has the uniform distribution on the unit sphere in Rⁿ, D = (D₁, …, D_n) has the Dirichlet distribution with parameters (½, …, ½), Θ is a non-negative random variable, and U, D and Θ are independent.
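The representation above is easy to sample from. The following sketch (an illustration only; the mixing variable Θ is fixed to 1 here, which is a simplification) uses two standard facts: a normalized Gaussian vector is uniform on the sphere, and a Dirichlet(½, …, ½) vector can be obtained by normalizing squared standard normals, since Gamma(½, 2) is the chi-square distribution with one degree of freedom.

```python
import math
import random

random.seed(0)

def cks_sample(n):
    """One draw of (U_1/sqrt(D_1), ..., U_n/sqrt(D_n)), with Theta fixed to 1."""
    # U uniform on the unit sphere: a normalized standard Gaussian vector.
    g = [random.gauss(0, 1) for _ in range(n)]
    r = math.sqrt(sum(x * x for x in g))
    u = [x / r for x in g]
    # D ~ Dirichlet(1/2, ..., 1/2): squared standard normals, normalized.
    z = [random.gauss(0, 1) ** 2 for _ in range(n)]
    s = sum(z)
    d = [x / s for x in z]
    # Sanity checks: D lies on the simplex, U on the unit sphere.
    assert abs(sum(d) - 1) < 1e-12 and abs(sum(x * x for x in u) - 1) < 1e-12
    return [ui / math.sqrt(di) for ui, di in zip(u, d)]

sample = cks_sample(3)
assert len(sample) == 3
```

An arbitrary non-negative random variable Θ, independent of U and D, would be multiplied into each coordinate to obtain the general ℓ¹-symmetric vector.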

The question of the characterization of all other kinds of pseudo-isotropic random vectors in Rⁿ remains open. In 1985 Richards (see [199], [200]) proposed a method for solving this problem, but he obtained only partial results on the general representation of pseudo-isotropic random vectors. Also in the paper of Misiewicz and Richards [171] there are only necessary conditions for ℓα-symmetric random vectors. In fact, it has not even been shown till now that if we consider the set M_n(c) of all pseudo-isotropic distributions on Rⁿ with a fixed quasi-norm c, then there exists one type of distribution, say μ₀ (like the uniform distribution on the unit sphere in Rⁿ for the quasi-norm c(ξ) = ‖ξ‖₂), such that every μ ∈ M_n(c) is a scale mixture of μ₀, even though we know that M_n(c) is a convex, weakly closed set. However, we show in the paper that for every quasi-normed space (E, c) the set of pseudo-isotropic random vectors with characteristic functions of the form φ(c(ξ)) is the weak closure of the set of convex linear combinations of its extreme points.

There is still another open problem. Every known quasi-norm c admissible for non-trivial pseudo-isotropic random vectors is given by formula (∗) for some α ∈ (0, 2], i.e. c is an Lα-norm. But maybe it is possible to obtain an admissible quasi-norm which cannot be given by this formula? For now, we do not even know any method of studying this problem.

This paper is meant as a review of the theory of pseudo-isotropic random vectors and stochastic processes. However, we do not give proofs of all the results; a proof is included only when it is important for further considerations, completely new, or when the original paper could be difficult to find. Most attention was paid to the connections between the properties of pseudo-isotropic random vectors and processes and the geometrical properties of their Reproducing Kernel Spaces.

Acknowledgements. The author would like to express her gratitude to Professor Czesław Ryll-Nardzewski for many inspiring discussions and critical remarks.

II. Pseudo-isotropic random vectors

In this chapter we give the basic properties of pseudo-isotropic random vectors, including known special cases. Some of the results given here are new. We also give a list of open questions in this area.

II.1. Symmetric stable vectors. Stable random variables and vectors play a crucial role in probability theory. Their investigation was started in the 1920s and 1930s by Paul Lévy and Aleksandr Yakovlevich Khinchin. The literature on this topic is very rich; see, e.g., P. Lévy [138], Gnedenko and Kolmogorov [72], Zolotarev [247], Ibragimov and Linnik [96]. Two books completely devoted to stable stochastic processes have appeared just recently: one by Samorodnitsky and Taqqu [212], the second by Janicki and Weron [98]. Also recently, Ledoux and Talagrand published the book "Probability in Banach Spaces" (see [134]), where stable random variables, vectors and processes are shown to play an important role in structure theorems for Banach spaces.

In this section we concentrate only on that part of the theory of stable distributions which will be useful in the theory of pseudo-isotropic random vectors. Namely, we give here the basic properties and representation theorems for symmetric stable random vectors. We also give some basic properties of the standard strictly positive α-stable random variables Θα. Finally, we construct the reproducing kernel space for a symmetric α-stable random vector in Rⁿ in analogy to the reproducing kernel Hilbert space for Gaussian random vectors. All symmetric α-stable vectors which appear in the paper are non-degenerate.

Definition II.1.1. A random variable X is said to have a stable distribution if for every choice of positive numbers a and b there exist a positive number c and a constant d such that

(II.1.1)  aX₁ + bX₂ =ᵈ cX + d,

where X₁, X₂ are independent copies of X, and where =ᵈ denotes equality in distribution. A random variable X is said to have a symmetric stable distribution if it has a stable distribution and P{X ∈ A} = P{X ∈ −A} for every Borel set A ⊂ R.

Theorem II.1.1. For any stable random variable X there exists a number α ∈ (0, 2] such that for every choice of positive numbers a and b there exists a constant d such that

  aX₁ + bX₂ =ᵈ (a^α + b^α)^{1/α} X + d,

where X₁, X₂ are independent copies of X.


See Feller [66], Theorem VI.1.1, for a proof. The constant α is called the index of stability or the characteristic exponent. A stable random variable X with index α is called α-stable.

Proofs of the following basic properties of symmetric stable random variables can easily be found in almost every book on probability theory (see e.g. Feller [66]).

Consider a symmetric α-stable random variable X. As X and −X have the same distribution, it is easy to see that for every a, b ∈ R,

  aX₁ + bX₂ =ᵈ (|a|^α + |b|^α)^{1/α} X,

where X₁ and X₂ are independent copies of X. Denote by Φ_X(ξ) the characteristic function of X. Then for all a, b, ξ ∈ R,

  Φ_X(aξ) Φ_X(bξ) = Φ_X((|a|^α + |b|^α)^{1/α} ξ),

and the only solution of this functional equation in the set of characteristic functions is

  Φ_X(ξ) = exp{−A|ξ|^α}

for some A > 0. The constant A, or rather the constant A^{1/α}, is the scale parameter of the random variable X. We denote by S(α, c) the distribution of the symmetric α-stable random variable for which A = c^α.
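That exp{−A|ξ|^α} does solve the functional equation is immediate (the particular values of A, a, b, ξ below are arbitrary choices for the check):

```python
import math

def phi(xi, A, alpha):
    """Characteristic function exp(-A*|xi|^alpha) of a symmetric alpha-stable law."""
    return math.exp(-A * abs(xi) ** alpha)

# Check Phi(a*xi) * Phi(b*xi) = Phi(c*xi) with c = (|a|^alpha + |b|^alpha)^(1/alpha).
for alpha in (0.5, 1.0, 1.7, 2.0):
    for (a, b, xi) in [(1.0, 2.0, 0.3), (-0.4, 1.1, 2.5)]:
        c = (abs(a) ** alpha + abs(b) ** alpha) ** (1 / alpha)
        assert math.isclose(phi(a * xi, 1.2, alpha) * phi(b * xi, 1.2, alpha),
                            phi(c * xi, 1.2, alpha), rel_tol=1e-12)
```

Uniqueness within the class of characteristic functions is the non-trivial part of the statement and is not checked here.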

For α = 2, the random variable X with distribution S(2, c) has all moments, i.e. X ∈ L_p for every p > 0. If 0 < α < 2 then the random variable X with distribution S(α, c) does not even belong to L_α. However, it can be shown that in that case

  lim_{t→∞} t^α P{|X| > t} = c_α^α · c^α,

where c_α > 0 depends only on α. Therefore X has moments of order r for every 0 < r < α, and

(II.1.2)  E|X|^r = c^r E|X_{s,α}|^r ≡ c^r c_{α,r}^r,

where X_{s,α} is the standard symmetric α-stable random variable with distribution S(α, 1).

If α ≥ 1, then the support of any α-stable, not necessarily symmetric, random variable is the whole of R, but if α ∈ (0, 1), then it is possible to construct a strictly positive α-stable random variable. In this paper we will need and use a special kind of such α-stable random variables, namely variables Θα, α ∈ (0, 1), whose Laplace transform is

(II.1.3)  E exp{−ξΘα} = exp{−ξ^α},  ξ > 0.

It easily follows from the equality of the corresponding Laplace transforms that Θα is an α-stable random variable. Using Bernstein's theorem we also see that Θα is concentrated on the positive half-line. Throughout the paper we will use the notation γ⁺_α for the distribution of Θα. Only in one case is the density of γ⁺_α given in an explicit form, namely

  γ⁺_{1/2}(dx) = (1/(2√π)) x^{−3/2} e^{−1/(4x)} dx,  x > 0;

for details see Feller [66], Examples II.4(f) and XIII.3(b). If α ∈ (0, 1), then the density of Θα can be obtained by the inverse Fourier transform of its characteristic function.

Namely, we have

  γ_α(dx) = (1/(πx)) [ ∫₀^∞ exp{−t − t^α x^{−α} cos(πα)} sin(t^α x^{−α} sin(πα)) dt ] dx,

and the proof of this formula can be found in [96], Theorem 2.3.1(3).
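The case α = 1/2 gives a concrete consistency check: there cos(πα) = 0 and sin(πα) = 1, and the integral must reproduce the explicit γ⁺_{1/2} density given above. The sketch below verifies this by crude numerical integration, and also checks the Laplace transform (II.1.3) by Monte Carlo using the classical representation Θ_{1/2} =ᵈ 1/(2Z²) with Z standard normal (a known fact about the Lévy distribution, not stated in the text); the step sizes, cutoffs and sample sizes are arbitrary choices.

```python
import math
import random

# 1) Numerical evaluation of the density integral at alpha = 1/2, x = 1,
#    compared with the closed form (1/(2*sqrt(pi))) * x^(-3/2) * exp(-1/(4x)).
def density_integral(x, alpha, steps=200000, upper=60.0):
    h = upper / steps
    total = 0.0
    for i in range(1, steps + 1):
        t = i * h
        w = t ** alpha * x ** (-alpha)
        total += math.exp(-t - w * math.cos(math.pi * alpha)) \
                 * math.sin(w * math.sin(math.pi * alpha)) * h
    return total / (math.pi * x)

x = 1.0
closed = 1 / (2 * math.sqrt(math.pi)) * x ** (-1.5) * math.exp(-1 / (4 * x))
assert abs(density_integral(x, 0.5) - closed) < 1e-3

# 2) Monte Carlo check of E exp(-xi * Theta) = exp(-xi^(1/2)) at xi = 1,
#    sampling Theta_{1/2} as 1/(2 Z^2) with Z ~ N(0, 1).
random.seed(1)
n = 200000
est = sum(math.exp(-1.0 / (2 * random.gauss(0, 1) ** 2)) for _ in range(n)) / n
assert abs(est - math.exp(-1.0)) < 0.01
```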

Definition II.1.2. A random vector X = (X₁, …, X_n) is said to be symmetric stable in Rⁿ (notation: SαS) if P{X ∈ A} = P{X ∈ −A} for every Borel set A ⊂ Rⁿ, and if for every choice of a, b ∈ R there exists a positive constant c such that

  aX₁ + bX₂ =ᵈ cX,

where X₁, X₂ are independent copies of X.

Theorem II.1.2. Let X = (X₁, …, X_n) be a symmetric stable random vector in Rⁿ. Then there exists α ∈ (0, 2] such that all linear combinations of the components of X are symmetric α-stable random variables.

Proof. The definition of a symmetric stable random vector X is equivalent to the following condition on its characteristic function Φ_X: for every a, b ∈ R there exists a positive constant c such that for every ξ ∈ Rⁿ,

  Φ_{aX}(ξ) Φ_{bX}(ξ) = Φ_{cX}(ξ).

Put ξ = (ξ₁, 0, …, 0) ∈ Rⁿ. Then the characteristic function Φ_{X₁} of the first component X₁ of the random vector X has the property Φ_{aX₁}(ξ₁) Φ_{bX₁}(ξ₁) = Φ_{cX₁}(ξ₁), which means that X₁ is a stable random variable with some index of stability α ∈ (0, 2]. Evidently X₁ is symmetric as a component of the symmetric random vector X, hence c^α = |a|^α + |b|^α. Consider now the random variable Y = ⟨ξ, X⟩ = Σ_{k=1}^n ξ_k X_k. Calculating the characteristic function Φ_Y of Y we get

  Φ_{aY}(t) Φ_{bY}(t) = Φ_{aX}(tξ) Φ_{bX}(tξ) = Φ_{cX}(tξ) = Φ_{cY}(t),

which means that the random variable Y is stable. As Y is a linear combination of the components of the symmetric random vector X, it is also symmetric. If the index of stability of Y is β, then c^β = |a|^β + |b|^β. Comparing with the stability of X₁ we have

  (|a|^β + |b|^β)^{1/β} = (|a|^α + |b|^α)^{1/α}

for every a, b ∈ R; this, however, is possible only if α = β. Now Y is a symmetric α-stable random variable and thus there exists a positive constant c(ξ) such that

  Φ_Y(t) = exp{−c(ξ)^α |t|^α},  t ∈ R.

Corollary II.1.1. If every linear combination of the components of a random vector X = (X₁, …, X_n) in Rⁿ is symmetric α-stable, then X is symmetric α-stable.

The next two theorems are known in the literature as the Lévy spectral representation for symmetric stable random vectors in Rⁿ (see [138], §63). In the language of the geometry of Banach spaces they can be expressed as follows:

Let X be a finite-dimensional linear space equipped with a quasi-norm, i.e. a continuous function c : X → [0, ∞) such that c(x) = 0 ⇔ x = 0 and c(tx) = |t| c(x) for every t ∈ R, x ∈ X. Then the function exp{−c(x)^α} is positive definite on X if and only if α ≤ 2 and (X, c) embeds isometrically into some Lα-space.

Lévy used finite measures ν on the unit sphere S^{n−1} ⊂ Rⁿ (so his "embedding" was into the space Lα(S^{n−1}, ν) with the correspondence x ↔ ⟨x, y⟩) in order to obtain uniqueness of the representation. For the purpose of this paper, however, it is better to omit this restriction.

Theorem II.1.3. For every positive finite symmetric measure ν on Rⁿ such that ∫_{Rⁿ} |⟨ξ, x⟩|^α ν(dx) < ∞ for every ξ = (ξ₁, …, ξ_n) ∈ Rⁿ, the formula

  Φ(ξ) = exp{ −∫_{Rⁿ} |⟨ξ, x⟩|^α ν(dx) },  ξ ∈ Rⁿ,

defines the characteristic function of some symmetric α-stable random vector X = (X₁, …, X_n) on Rⁿ.

Proof. Let ν be a positive finite symmetric measure on Rⁿ. If the characteristic function of a random vector X = (X₁, …, X_n) is given by the function Φ, then evidently X is symmetric α-stable. So we only need to show that the function Φ is indeed the characteristic function of some random vector. To this end, define a family of probability measures

  Exp(m_ε) = exp(−m_ε(Rⁿ)) Σ_{k=0}^∞ m_ε^{∗k}/k!,

where

  m_ε(A) ≡ a^{−1} ∫_ε^∞ ν(A/s) s^{−α−1} ds

for every Borel set A ⊂ Rⁿ, and where the constant a is defined by

  a = ∫₀^∞ (1 − cos s) s^{−α−1} ds.

It is easy to see now that Φ is the characteristic function of the probability measure which is the weak limit of the probability measures Exp(m_ε) as ε ↘ 0, because

  lim_{ε↘0} ∫_{Rⁿ} e^{i⟨ξ,x⟩} Exp(m_ε)(dx) = lim_{ε↘0} exp{ −∫_{Rⁿ} (1 − cos⟨ξ, x⟩) m_ε(dx) }
    = lim_{ε↘0} exp{ −a^{−1} ∫_{Rⁿ} |⟨ξ, x⟩|^α ν(dx) ∫_ε^∞ (1 − cos s) s^{−α−1} ds }
    = exp{ −∫_{Rⁿ} |⟨ξ, x⟩|^α ν(dx) }.
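The normalizing constant a is finite for every α ∈ (0, 2): near zero the integrand behaves like s^{1−α}/2, and at infinity like s^{−α−1}. For α = 1/2 it can be evaluated in closed form, via the classical identity ∫₀^∞ (1 − cos s) s^{−1−α} ds = Γ(2 − α) cos(πα/2)/(α(1 − α)), as √(2π). The sketch below (cutoff and step size are arbitrary choices) confirms this numerically, with an analytic correction for the truncated tail:

```python
import math

# Evaluate a = \int_0^\infty (1 - cos s) s^{-alpha-1} ds numerically.
def a_constant(alpha, cutoff=10000.0, h=0.01):
    total = 0.0
    n = int(cutoff / h)
    for i in range(1, n + 1):
        s = i * h
        total += (1 - math.cos(s)) * s ** (-alpha - 1) * h
    # Tail beyond the cutoff: the oscillating part is O(cutoff^(-alpha-1));
    # the monotone part integrates exactly to cutoff^(-alpha)/alpha.
    total += cutoff ** (-alpha) / alpha
    return total

# For alpha = 1/2 the closed form is sqrt(2*pi).
assert abs(a_constant(0.5) - math.sqrt(2 * math.pi)) < 0.01
```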

Theorem II.1.4. If a random vector X = (X₁, …, X_n) on Rⁿ is symmetric α-stable, then there exists a positive finite measure ν on Rⁿ such that

  E e^{i⟨ξ,X⟩} = exp{ −∫_{Rⁿ} |⟨ξ, x⟩|^α ν(dx) }.

Proof. We follow the proof given by Ledoux and Talagrand in [134]. Recall that if a random variable Y has the symmetric α-stable distribution S(α, c) and r < α, then E|Y|^r = c^r c_{α,r}^r (see formula (II.1.2)). It follows that for every ξ = (ξ₁, …, ξ_n) ∈ Rⁿ,

  E exp{ i Σ_{k=1}^n ξ_k X_k } = exp{−c(ξ)^α} = exp{ −c_{α,r}^{−α} ( E| Σ_{k=1}^n ξ_k X_k |^r )^{α/r} },

where c(ξ) is the scale parameter of the random variable Σ_{k=1}^n ξ_k X_k. For every r < α define then a positive finite measure m_r on the unit sphere S = S_∞^{n−1} of the sup-norm ‖·‖_∞ on Rⁿ by setting, for every bounded measurable function φ on S,

  ∫_S φ(y) m_r(dy) = c_{α,r}^{−r} ∫_{Rⁿ} φ(x/‖x‖_∞) ‖x‖_∞^r P_X(dx),

where P_X is the distribution of X = (X₁, …, X_n). Hence for every ξ = (ξ₁, …, ξ_n) ∈ Rⁿ,

  E exp{ i Σ_{k=1}^n ξ_k X_k } = exp{ −( ∫_S | Σ_{k=1}^n ξ_k x_k |^r m_r(dx) )^{α/r} }.

Now the total mass |m_r| of the measure m_r is easily seen to be majorized by

  |m_r| ≤ [ inf_{x∈S} Σ_{k=1}^n |x_k|^r ]^{−1} Σ_{k=1}^n c(e_k)^r,

where e_k, 1 ≤ k ≤ n, are the unit vectors of Rⁿ. Therefore sup_{r<α} |m_r| < ∞. Let m be a cluster point (in the weak-∗ sense) of {m_r : r < α}; then m is a positive finite measure which is clearly the spectral measure of X.

Remark II.1.1. It is easy to see that for every finite positive symmetric measure ν on Rⁿ such that ∫_{Rⁿ} |⟨ξ, x⟩|^α ν(dx) < ∞ for every ξ = (ξ₁, …, ξ_n) ∈ Rⁿ we can construct a finite positive symmetric measure ν₁ on S^{n−1} = {x ∈ Rⁿ : Σ_{k=1}^n x_k² = 1} such that for every ξ ∈ Rⁿ,

  ∫_{Rⁿ} |⟨ξ, x⟩|^α ν(dx) = ∫_{S^{n−1}} |⟨ξ, x⟩|^α ν₁(dx).

It is enough to use spherical variables and integrate out the radial part. If the characteristic function of a symmetric α-stable random vector X is

  exp{ −∫_{S^{n−1}} |⟨ξ, x⟩|^α ν(dx) },

then the symmetric measure ν is called the spectral measure of X. For 0 < α < 2 the spectral measure of a symmetric α-stable random vector is uniquely determined.

Example II.1.1. A random vector (X₁, …, X_n) is symmetric Gaussian if there exists a symmetric positive definite n × n matrix R such that its characteristic function is

  E exp{ i Σ_{k=1}^n ξ_k X_k } = exp{ −½⟨ξ, Rξ⟩ }.

This means that for every ξ ∈ Rⁿ the random variable Σ_{k=1}^n ξ_k X_k has the same distribution as (⟨ξ, Rξ⟩)^{1/2} X₀, where the random variable X₀ has distribution N(0, 1). It is easy to see that for every symmetric positive definite n × n matrix there exists a finite positive measure ν on S^{n−1} such that

  ½⟨ξ, Rξ⟩ = ∫_{S^{n−1}} |⟨ξ, x⟩|² ν(dx),  ξ ∈ Rⁿ.

However, in the case of symmetric Gaussian random vectors the spectral measure ν is not uniquely determined; we have e.g.

  Σ_{k=1}^n ξ_k² = ∫_{S^{n−1}} |⟨ξ, x⟩|² · ½ Σ_{k=1}^n (δ_{e_k} + δ_{−e_k})(dx) = ∫_{S^{n−1}} |⟨ξ, x⟩|² c λ(dx),

where e_k = (0, …, 0, 1, 0, …, 0), λ is the uniform distribution on the unit sphere S^{n−1} and c is a suitable constant.
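For n = 2 the suitable constant is c = n = 2, since the average of ⟨ξ, x⟩² over the uniform distribution on the unit circle equals ‖ξ‖²/2. A quadrature sketch (the grid size and the test vector ξ are arbitrary choices):

```python
import math

# (1/(2*pi)) * \int_0^{2pi} (xi1*cos t + xi2*sin t)^2 dt = (xi1^2 + xi2^2)/2,
# so c = 2 makes the uniform measure lambda reproduce sum(xi_k^2) for n = 2.
def sphere_average(xi1, xi2, steps=100000):
    h = 2 * math.pi / steps
    return sum((xi1 * math.cos(i * h) + xi2 * math.sin(i * h)) ** 2
               for i in range(steps)) * h / (2 * math.pi)

xi1, xi2 = 0.7, -1.3
assert abs(2 * sphere_average(xi1, xi2) - (xi1 ** 2 + xi2 ** 2)) < 1e-6
```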

Example II.1.2. If the spectral measure of a symmetric α-stable random vector (X₁, …, X_n) is of the form

  ν(dx) = ½ Σ_{k=1}^n a_k (δ_{e_k} + δ_{−e_k})(dx)

for some positive constants a₁, …, a_n, then the characteristic function of (X₁, …, X_n) can be written as

  Φ_X(ξ) = exp{ −Σ_{k=1}^n a_k |ξ_k|^α }.

It is easy to see that in this case X has independent components. The opposite implication also holds: if a symmetric α-stable random vector X has independent components, then its spectral measure is of the form ν(dx) = ½ Σ_{k=1}^n a_k (δ_{e_k} + δ_{−e_k})(dx) for some positive constants a₁, …, a_n.
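The first implication is a one-line computation: each atom pair ±e_k contributes (a_k/2)(|ξ_k|^α + |−ξ_k|^α) = a_k |ξ_k|^α to the spectral integral, so the characteristic function factorizes over the coordinates. A sketch with arbitrary weights:

```python
# Spectral integral \int |<xi, x>|^alpha nu(dx) for the discrete measure
# nu = (1/2) * sum_k a_k (delta_{e_k} + delta_{-e_k}).
alpha = 1.3
a = [0.5, 2.0, 1.0]                      # hypothetical weights a_k
xi = [0.4, -1.1, 0.7]

atoms = []                               # (weight, point) pairs of nu
for k, ak in enumerate(a):
    e = [0.0] * len(a)
    e[k] = 1.0
    atoms.append((ak / 2, e))
    atoms.append((ak / 2, [-v for v in e]))

integral = sum(w * abs(sum(x * y for x, y in zip(xi, p))) ** alpha
               for w, p in atoms)
expected = sum(ak * abs(x) ** alpha for ak, x in zip(a, xi))
assert abs(integral - expected) < 1e-12
```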

Example II.1.3. Let X = (X₁, …, X_n) be a symmetric α-stable random vector on Rⁿ with spectral measure ν and let Θ_β, where β ∈ (0, 1), be independent of X. Consider the random vector Y = X Θ_β^{1/α}. The characteristic function of Y is of the form

  E e^{i⟨ξt, Y⟩} = E exp{ −Θ_β |t|^α ∫_{S^{n−1}} |⟨ξ, x⟩|^α ν(dx) }
    = exp{ −|t|^{αβ} ( ∫_{S^{n−1}} |⟨ξ, x⟩|^α ν(dx) )^β },

for every ξ ∈ Rⁿ and t ∈ R, which means that all linear combinations of components of Y are symmetric (αβ)-stable random variables. From Corollary II.1.1 we see that the random vector Y is also symmetric (αβ)-stable, so by Theorem II.1.4 there exists a finite positive measure ν₁ on S^{n−1} such that

  E e^{i⟨ξ, Y⟩} = exp{ −∫_{S^{n−1}} |⟨ξ, x⟩|^{αβ} ν₁(dx) }.

Finally, we see that for every α ∈ (0, 2], κ < α and every finite positive measure ν on S^{n−1} there exists a finite positive measure ν₁ on S^{n−1} such that

  ( ∫_{S^{n−1}} |⟨ξ, x⟩|^α ν(dx) )^{1/α} = ( ∫_{S^{n−1}} |⟨ξ, x⟩|^κ ν₁(dx) )^{1/κ}.

To see this, it is enough to put β = κ/α in the previous considerations.


For a symmetric α-stable random vector X with spectral measure ν let ℜ : Rⁿ → Lα(S^{n−1}, ν) be the linear operator defined by the formula ℜ(ξ) = ⟨ξ, ·⟩. The characteristic function of the random vector X can now be written in the following, slightly more convenient way:

(II.1.4)  E exp{ i Σ_{k=1}^n ξ_k X_k } = exp{ −‖ℜ(ξ)‖_α^α }.

Define now a linear space H(X) as follows:

  H(X) = ℜ(Rⁿ) = { ⟨ξ, ·⟩ : ξ ∈ Rⁿ } ⊂ Lα(S^{n−1}, ν).

As we have seen in Example II.1.3, for every κ < α there exists a finite positive symmetric measure ν₁ on S^{n−1} such that H(X) embeds isometrically into Lκ(S^{n−1}, ν₁). It can also happen that a given space H(X) connected with a symmetric α-stable random vector embeds isometrically into some Lβ-space for some β > α. By α° = α°(ℜ) we will denote the following constant:

  α° = sup{ β ∈ (0, 2] : H(X) embeds isometrically into some Lβ-space }.

Evidently α° ≥ α. The geometrical properties of the space H(X) will be the subject of extensive studies in Chapters V and VI. We mention that for a symmetric Gaussian random vector X, the space H(X) is called the Reproducing Kernel Hilbert Space, and it is indeed the Hilbert space defined on Rⁿ by the inner product

  ⟨ξ, η⟩ = E(⟨ξ, X⟩⟨η, X⟩) = E( Σ_{j,k=1}^n ξ_j η_k X_j X_k ),  ξ, η ∈ Rⁿ.

II.2. Pseudo-isotropic random vectors. In this section we give the definition and basic properties of pseudo-isotropic random vectors. The concept of pseudo-isotropic vectors appeared as a natural generalization of spherically invariant vectors, elliptically contoured vectors, α-symmetric vectors (see Cambanis, Keener and Simons [38]) and norm-dependent vectors (see e.g. Bretagnolle, Dacunha-Castelle and Krivine [29]). All these kinds of random vectors were extensively studied, and will be described in Sections II.3 and II.4. The term pseudo-isotropic distributions was introduced by Misiewicz and Scheffer in 1992 (see [172]). The same generality of definition was obtained by Eaton [58], [59] when defining random variables (probability measures on R) with n-dimensional versions; however, it was hard to talk in this language about stochastic processes. We notice that in spite of the generality of the definition, no example is known yet of a pseudo-isotropic random vector which is not Lα-norm-symmetric for some α ∈ (0, 2].

Definition II.2.1. A symmetric random vector (X₁, …, X_n) is said to be pseudo-isotropic if for every ξ ∈ Rⁿ, ξ ≠ 0, there exists a positive constant c(ξ) satisfying

  Σ_{k=1}^n ξ_k X_k =ᵈ c(ξ) X₁,

where =ᵈ denotes equality of distributions. Similarly, a symmetric probability measure (or a symmetric σ-finite measure) μ on Rⁿ is said to be pseudo-isotropic if for every ξ ∈ Rⁿ, ξ ≠ 0, there exists a positive constant c(ξ) such that for every Borel set A ⊂ R,

  μ{ x ∈ Rⁿ : Σ_{k=1}^n ξ_k x_k ∈ A } = μ{ x ∈ Rⁿ : c(ξ) · x₁ ∈ A }.

R e m a r k II.2.1. Clearly, a single point mass at the origin is pseudo-isotropic, and mass at the origin can be added to or removed from a pseudo-isotropic measure without destroying pseudo-isotropy. We call a pseudo-isotropic measure pure if it gives no mass to the origin.

Remark II.2.2. Symmetric measures on a single line through the origin are not pseudo-isotropic on R^n (except for masses at the origin only). However, if μ is such a measure then each of its one-dimensional orthogonal projections is obtained from μ by a non-negative homothety T_a, a ≥ 0, where T_aμ(B) = μ(B/a) and T_0μ = δ_0·μ(R).

Remark II.2.3. If X = (X_1, …, X_n) is a symmetric α-stable random vector with spectral measure ν, then it is pseudo-isotropic with the function c given by
\[ c(\xi)^{\alpha} = \|\Re(\xi)\|_{\alpha}^{\alpha} \]
for the linear operator ℜ : R^n → L_α(S^{n−1}, ν) defined by ℜ(ξ) = ⟨ξ, x⟩.
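To make this concrete, here is a small numerical sketch (the discrete spectral measure below is an illustrative choice, not from the text): when ν puts mass 1/2 at each of ±e_1, ±e_2, the coordinates are independent SαS variables and c reduces to the ℓ_α-norm of ξ.

```python
import numpy as np

# Sketch: c(xi)^alpha = integral of |<xi, x>|^alpha against the spectral measure nu.
# Illustrative discrete nu with mass 1/2 at +-e_1, +-e_2 (independent coordinates).
def c_from_spectral(xi, atoms, weights, alpha):
    xi = np.asarray(xi, dtype=float)
    total = sum(w * abs(float(xi @ a)) ** alpha for a, w in zip(atoms, weights))
    return total ** (1.0 / alpha)

alpha = 1.3
atoms = [np.array([1.0, 0.0]), np.array([-1.0, 0.0]),
         np.array([0.0, 1.0]), np.array([0.0, -1.0])]
weights = [0.5, 0.5, 0.5, 0.5]

xi = np.array([0.7, -1.2])
lhs = c_from_spectral(xi, atoms, weights, alpha)
rhs = (abs(xi[0]) ** alpha + abs(xi[1]) ** alpha) ** (1.0 / alpha)
print(lhs, rhs)    # both equal the l_alpha-norm of xi
```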

Remark II.2.4. The definition of a pseudo-isotropic random vector resembles the following definition (equivalent, for symmetric variables, to Definition II.1.1) of a symmetric stable random variable X: for every n ∈ N and every choice of ξ = (ξ_1, …, ξ_n) ∈ R^n there exists a positive constant c = c(ξ) such that
\[ \sum_{k=1}^{n} \xi_k X_k \overset{d}{=} c(\xi) X, \]
where X_1, …, X_n are independent copies of X.

Indeed, if X = (X_1, …, X_n) is pseudo-isotropic, then X_k ≐ c(e_k)X_1, so without loss of generality we can assume that the X_k's are identically distributed. Thus we can say that X_1, the one-dimensional projection of a pseudo-isotropic random vector X = (X_1, …, X_n), fulfills the nth condition of the above definition with the random variables X_1, …, X_n being not necessarily independent copies of the random variable X_1. Equivalently, we can say that if a pseudo-isotropic random vector has at least two independent components then it is symmetric α-stable for some α ∈ (0, 2].

Remark II.2.5. Denote by ϕ(t), t ∈ R, the characteristic function of the first component X_1 of the pseudo-isotropic random vector X = (X_1, …, X_n). Then the characteristic function of X at the point ξ = (ξ_1, …, ξ_n) is of the form
\[ E\exp\{i\langle \xi, X\rangle\} = E\exp\{i\, c(\xi) X_1\} = \varphi(c(\xi)), \]
so it has the same level curves as the function c. Moreover, if ϕ(c(ξ)) is the characteristic function of a non-degenerate pseudo-isotropic random vector X = (X_1, …, X_n), then there exists another pseudo-isotropic random vector Y = (Y_1, …, Y_n) with characteristic function ψ(c(ξ)) for which the function ψ considered on [0, ∞) is positive, decreasing and one-to-one. To see this, consider for example Y = X·Θ, where Θ has a symmetric Cauchy distribution, E exp{itΘ} = exp{−|t|}, with X and Θ independent. Indeed,
\[ E\exp\{i\langle \xi, Y\rangle\} = E\exp\{i\, c(\xi) X_1 \Theta\} = E\exp\{-c(\xi)\,|X_1|\} =: \psi(c(\xi)). \]

Proposition II.2.1 (Misiewicz and Scheffer [172], §4.1). Assume that the characteristic function of a pseudo-isotropic vector X = (X_1, …, X_n) can be written in two different ways, ϕ_1(c_1(ξ)) and ϕ_2(c_2(ξ)). Then there exists a positive constant a such that c_1(ξ) = a c_2(ξ) and ϕ_1(t) = ϕ_2(t/a).

The basic properties of the function c appearing in the definition of a pseudo-isotropic random vector are described in the following:

Theorem II.2.1 (Misiewicz and Ryll-Nardzewski [170]). If (X_1, …, X_n) is a pseudo-isotropic random vector with function c, then:
1) c(tξ) = |t| c(ξ) for every ξ ∈ R^n and t ∈ R.
2) c : R^n → [0, ∞) is a continuous function.
3) If ‖·‖ is a norm on R^n, then there exist positive constants m, M such that for every ξ ∈ R^n, m‖ξ‖ ≤ c(ξ) ≤ M‖ξ‖.

Proof. Without loss of generality (see Remark II.2.5) we can assume that the characteristic function ϕ(x) of X_1 is strictly decreasing on [0, ∞). Now, the first two properties trivially follow from the definition of the function c and the continuity of the corresponding characteristic function. To prove property 3) it is enough to notice that c is a continuous function on the compact set {ξ ∈ R^n : ‖ξ‖ = 1}.

Every function c : R^n → [0, ∞) with the properties given in Theorem II.2.1 will be called a quasi-norm on R^n. Notice that if c is a quasi-norm on R^n then (by property 3)) there exists a positive constant K such that
\[ c(\xi + \eta) \le K\,(c(\xi) + c(\eta)) \qquad \forall \xi, \eta \in \mathbb{R}^n. \]
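For the ℓ_α quasi-norms with 0 < α < 1 (an illustrative family; for such α these are quasi-norms but not norms), the constant K = 2^{1/α−1} suffices, which a quick random check confirms:

```python
import numpy as np

# Sketch: for c(xi) = (sum |xi_k|^alpha)^(1/alpha) with 0 < alpha < 1,
# c(xi + eta) <= K (c(xi) + c(eta)) holds with K = 2^(1/alpha - 1).
rng = np.random.default_rng(1)
alpha = 0.5
K = 2.0 ** (1.0 / alpha - 1.0)

def c(v):
    return float(np.sum(np.abs(v) ** alpha) ** (1.0 / alpha))

worst = 0.0
for _ in range(10_000):
    xi, eta = rng.normal(size=3), rng.normal(size=3)
    worst = max(worst, c(xi + eta) / (c(xi) + c(eta)))
print(worst, "<=", K)   # the observed quasi-triangle ratio never exceeds K
```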

The regularity of the level curves of the characteristic function of a pseudo-isotropic random vector imposes some regularity conditions on the distribution of its one-dimensional projections. Namely, we have:

Theorem II.2.2 (Misiewicz [164], Th. 2). Let (X_1, X_2) be a pseudo-isotropic random vector. Then one of the following conditions holds:
1) P{X_1 ∈ U} > 0 for every open set U ⊂ R;
2) (X_1, X_2) is bounded and c(ξ)² = ⟨ξ, Rξ⟩ for some symmetric positive definite 2×2 matrix R.

Proof. Without loss of generality assume that c(1, 0) = 1. Assume also that there exists an open set, and consequently an open interval (t, s) ⊂ R, such that P{t < X_1 < s} = 0. By the symmetry of the random variable X_1 we can choose t > 0. Now, for every ξ ∈ R², pseudo-isotropy gives
\[ P\{c(\xi)^{-1}(\xi_1 X_1 + \xi_2 X_2) \in (t, s)\} = P\{X_1 \in (t, s)\} = 0. \]
The sets A(ξ) = {x ∈ R² : c(ξ)^{−1}(ξ_1 x_1 + ξ_2 x_2) ∈ (t, s)}, ξ ∈ R², are open cylinders in R², and it is easy to see that
\[ B \equiv \{x \in \mathbb{R}^2 : \|x\| > Mt\} \subset \bigcup_{\xi} A(\xi), \]

where M = sup{c(ξ) : ‖ξ‖ = 1, ξ ∈ R²} and ‖·‖ is the Euclidean norm on R². To show that P{(X_1, X_2) ∈ B} = 0, notice that for any compact set K ⊂ B there exists a finite sequence ξ^{(1)}, …, ξ^{(n)} ∈ R² such that K ⊂ A(ξ^{(1)}) ∪ … ∪ A(ξ^{(n)}), and we obtain
\[ P\{(X_1, X_2) \in K\} \le \sum_{k=1}^{n} P\{(X_1, X_2) \in A(\xi^{(k)})\} = 0. \]
This means that the random vector (X_1, X_2) takes values in the compact set R² \ B with probability one; hence, in particular, it has a finite weak second moment, and
\[ \infty > E|\xi_1 X_1 + \xi_2 X_2|^2 = E|c(\xi) X_1|^2 = c(\xi)^2 \cdot E|X_1|^2, \]
thus the function c(ξ) is defined by the L_2-norm on R², which ends the proof.

The next theorem was proved by M. Keane and the author in 1992, but the proof presented below has never been published (indicated by "NP"):

Theorem II.2.3. (NP) Let μ be a σ-finite measure on R² which is pure and pseudo-isotropic. Then, for any straight line ℓ in R², μ(ℓ) = 0.

Proof. Denote by P(μ), for a measure μ on R², the collection of measures on R obtained by projecting μ orthogonally onto straight lines through the origin of R², isometrically identified with R. Our proof consists of two parts.

Part 1. For any point Q ∈ R², μ({Q}) = 0. To see this, suppose that μ({Q}) = q > 0 for some Q ∈ R². Since μ is pure, Q ≠ (0, 0). Therefore some projection carries Q to the origin, and one of the measures in P(μ) has an atom at the origin of mass at least q. Since all measures in P(μ) are rescalings of the same measure, it follows that each measure in P(μ) has an atom of mass at least q at the origin. Translating back to μ yields μ(ℓ) ≥ q for each line ℓ through the origin in R², which is impossible if μ is σ-finite, since μ({(0, 0)}) = 0. Indeed, if B_k, k ∈ N, are such that μ(B_k) < ∞, B_1 ⊂ B_2 ⊂ …, and R² = ⋃_{k=1}^∞ B_k, then there exists at least one k° such that μ(B_{k°} ∩ ℓ) > q/2 for infinitely many lines ℓ through the origin. But then μ(B_{k°}) ≥ Σ_ℓ μ(B_{k°} ∩ ℓ) = ∞, a contradiction.

Part 2. For any line ℓ in R², μ(ℓ) = 0. Suppose that μ(ℓ) = q > 0. Projecting in the direction of ℓ yields a measure in P(μ) with an atom of mass q, not necessarily at the origin. All measures in P(μ) have an atom of mass q. This means that for each direction Θ in R² there is a straight line ℓ_Θ in direction Θ such that μ(ℓ_Θ) = q > 0. Using now Part 1, it is easy to see that this contradicts the σ-finiteness of μ, since μ({Q}) = 0 for all Q ∈ R².

It can easily be seen from the above proposition that if μ is a σ-finite measure on R^n which is pure and pseudo-isotropic, then μ(L) = 0 for every proper hyperplane L in R^n. If not, then there exists an (n − 1)-dimensional hyperplane L_1 with positive measure, and the projection T of the measure μ onto the line orthogonal to L_1 would have an atom at the point T(L_1), which is impossible.


Let c : R^n → [0, ∞) be a quasi-norm on R^n. We define M(c, n) as the set of all symmetric probability distributions μ on R for which μ̂(c(ξ)), ξ ∈ R^n, is the characteristic function of some (of course, pseudo-isotropic) measure on R^n. It is not difficult to show the following (for details see [170]).

Theorem II.2.4 (Misiewicz and Ryll-Nardzewski [170]). For every n ∈ N and every quasi-norm c : R^n → [0, ∞) the set M(c, n) has the following properties:
(i) if μ_1, μ_2 ∈ M(c, n), then μ_1 ∗ μ_2 ∈ M(c, n);
(ii) if μ_1, μ_2 ∈ M(c, n) and 0 ≤ p ≤ 1, then pμ_1 + (1 − p)μ_2 ∈ M(c, n);
(iii) if {μ_k} ⊂ M(c, n) and μ_k ⇒ μ, then μ ∈ M(c, n);
(iv) if there exists μ ∈ M(c, n), μ ≠ δ_0, then M(c, n) also contains another measure μ_1 ≠ δ_0 having both a density and a characteristic function infinitely differentiable on R \ {0}.

Proof. The first three properties are obvious. For (iv) it is enough to take the pseudo-isotropic random vector X = (X_1, …, X_n) with characteristic function μ̂(c(ξ)), ξ ∈ R^n, and define μ_1 as the distribution of the one-dimensional projection of the random vector Y = X·Θ for a random variable Θ having, e.g., a symmetric Gaussian or a symmetric Cauchy distribution.

The following proposition has not been published yet.

Proposition II.2.2. (NP) For every n ∈ N and every quasi-norm c : R^n → [0, ∞) there exists a set of extreme points Extr(c, n) ⊂ M(c, n) such that
(i) δ_0 ∈ Extr(c, n);
(ii) for every μ ∈ Extr(c, n) and every a > 0 the measure T_a(μ) ∈ Extr(c, n);
(iii) the set M(c, n) is equal to the intersection of the set of all symmetric probability measures with the weak closure of the set of all convex linear combinations of elements of Extr(c, n).

Proof. Consider the set of measures M(c, n) as a subset of the symmetric probability measures on [−∞, ∞]. By K we denote the weak closure of M(c, n). Then K is a convex, weakly closed set of measures on the compact set [−∞, ∞], so it contains its extreme points Extr(K). It is easy to see that δ_0 ∈ Extr(K). It is also evident that condition (ii) holds for every μ ∈ Extr(K). We only need to show that if μ ∈ Extr(K) then μ ∈ M(c, n) or μ((−∞, ∞)) = 0. Thus, assume that μ ∈ Extr(K). Then μ = αμ_0 + ((1−α)/2)(δ_{−∞} + δ_{∞}), where μ_0 is a symmetric probability measure on R. Let μ_k ∈ M(c, n) converge weakly to μ. It is easy to see that
\[ \alpha\hat{\mu}_0(t) = \lim_{r\to\infty}\lim_{k\to\infty} \int_{-r}^{r} e^{itx}\, \mu_k(dx). \]
Let ν_k denote the pseudo-isotropic measure on R^n with characteristic function μ̂_k(c(ξ)). We need to show that
\[ \text{(a)}\qquad \lim_{r\to\infty}\lim_{k\to\infty} \int\limits_{\|x\|<r} e^{i\langle \xi, x\rangle}\, \nu_k(dx) = \alpha\hat{\mu}_0(c(\xi)). \]
This equality means that the function μ̂_0(c(ξ)) is positive definite on R^n as a limit of positive definite functions, thus μ_0 ∈ M(c, n) and, consequently, α = 1 or α = 0, which was to be shown. In order to prove (a), note that for every ξ ∈ S^{n−1} we have
\[ \text{(b)}\qquad \int\limits_{|\langle \xi, x\rangle|<r} e^{i\langle t\xi, x\rangle}\, \nu_k(dx) = \int_{-r/c(\xi)}^{r/c(\xi)} e^{i\,c(t\xi)x}\, \mu_k(dx). \]

Since m < c(ξ) < M for some positive constants m, M, the right-hand side of (b) tends to αμ̂_0(c(tξ)). Let ν_{k,l} be an l-dimensional projection of ν_k, l = 2, …, n, and let r_0, k_0 be large enough that α − ε < μ_k([−r, r]) < α + ε for every r ≥ r_0, k ≥ k_0. For ξ ∈ S^{n−1} we define B(r, ξ) = {x : |⟨ξ, x⟩| < r_0, ‖x‖ ≥ r}. If ν_{k,2}(B(r, ξ)) = p and ξ′ ∈ S^{n−1}, then
\[ \nu_{k,2}(B(r,\xi')) \ge \nu_{k,2}\{|\langle \xi', x\rangle| < r_0\} - \nu_{k,2}\{r_0 \le |\langle \xi, x\rangle| \le r\} - \nu_{k,2}\{|\langle \xi, x\rangle| \le r_0,\ \|x\| \le r\} \ge \alpha - \varepsilon - 2\varepsilon - (\alpha + \varepsilon - p) = p - 4\varepsilon. \]
If N ≥ (1 − α + ε)/p and r is large enough that there exist at least N disjoint sets B(r, ξ_1), …, B(r, ξ_N) with ξ_1, …, ξ_N ∈ S^{n−1}, then
\[ 1 - \alpha + \varepsilon + p \ge \nu_{k,2}\{\|x\| \ge r_0\} \ge \sum_{i=1}^{N} \nu_{k,2}(B(r, \xi_i)), \]
and we get p ≤ 5ε. Repeating this procedure n − 1 times, we see that for every l = 2, 3, …, n the sets {x ∈ R^{l+1} : ‖x‖ ≤ r} are well approximated (for large r) by cylinders with l-dimensional basis, and
\[ \Big| \int\limits_{B(r,\xi)} e^{i\langle t\xi, x\rangle}\, \nu_k(dx) \Big| \le \nu_k(B(r,\xi)) \le C\varepsilon, \]
which together with (b) completes the proof of (a). Condition (iii) follows trivially from these considerations.

The next theorem, stating that every two-dimensional Banach space embeds isometrically into some L_1-space, has been proved by several authors in different ways; see e.g. Ferguson 1962 [67], Herz 1963 [92], Lindenstrauss 1964 [141], Assouad 1979–1980 [14], [15] or Misiewicz and Ryll-Nardzewski 1989 [170]. We recall here an outline of the proof from [170], as the one most useful for direct calculations.

Theorem II.2.5. A function ψ(t, s) = exp{−c(t, s)}, t, s ∈ R, is the characteristic function of a pseudo-isotropic, 1-stable random vector (X_1, X_2) if and only if the function c(t, s) defines a norm on R².

Proof. If ψ(t, s) is the characteristic function of a symmetric 1-stable random vector (X_1, X_2), then
\[ c(t, s) = \int_{S^1} |t x_1 + s x_2|\, \nu(dx) \]
for some positive finite measure ν on S¹ ⊂ R², hence c is an L_1-norm on R². Now we only need to prove that for every norm c(t, s) on R² there exists a finite measure ν on (0, 2π] such that
\[ c(t, s) = \int_0^{2\pi} |t\cos\varphi + s\sin\varphi|\, \nu(d\varphi). \]
Let us define a function q as follows:
\[ q(\varphi) = c(\cos\varphi, \sin\varphi), \]
and assume for a while that q has a continuous second derivative (this means that the norm c is smooth enough). In this case the convexity of the set {(t, s) : c(t, s) ≤ 1} together with the homogeneity of the function c is equivalent to the inequality q″ + q ≥ 0. It is easy to check that
\[ 4c(t, s) = \int_0^{2\pi} |t\cos\varphi + s\sin\varphi|\,\big(q''(\varphi - \pi/2) + q(\varphi - \pi/2)\big)\, d\varphi = r \int_0^{2\pi} |\cos(\varphi - \varphi_0)|\,\big(q''(\varphi - \pi/2) + q(\varphi - \pi/2)\big)\, d\varphi, \]
where t = r cos φ_0, s = r sin φ_0; thus c(t, s) = r q(φ_0). We obtain an explicit formula for the density of the measure ν: ν(dφ) = ¼(q″(φ − π/2) + q(φ − π/2)) dφ. Less smooth norms can always be approximated by ones which are smooth enough.

Example II.2.1. Let c(t, s) = (|s|^α + |t|^α)^{1/α} for some α > 1. It is only a matter of laborious calculation to check that in this case
\[ q''(\varphi - \pi/2) + q(\varphi - \pi/2) = q''(\varphi) + q(\varphi) = (\alpha - 1)\,|\cos\varphi\,\sin\varphi|^{\alpha-2}\,(|\cos\varphi|^{\alpha} + |\sin\varphi|^{\alpha})^{1/\alpha - 2} \]
for φ ≠ (π/2)k, k ∈ {1, 2, 3, 4}. Theorem II.2.5 then states that ¼(q″ + q) is the density of the spectral measure of the two-dimensional symmetric Cauchy random vector (X_1, X_2) with characteristic function exp{−(|s|^α + |t|^α)^{1/α}}.
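The claimed spectral density can be checked numerically. Below is a sketch for α = 3 (the 1/4 normalization is the one from the proof of Theorem II.2.5); the integral of |t cos φ + s sin φ| against this density should reproduce c(t, s) = (|t|^α + |s|^α)^{1/α}.

```python
import numpy as np

# Sketch: verify that nu(dphi) = (1/4)(q'' + q)(phi) dphi, with
# (q''+q)(phi) = (alpha-1)|cos sin|^(alpha-2)(|cos|^alpha+|sin|^alpha)^(1/alpha-2),
# reproduces c(t, s) = (|t|^alpha + |s|^alpha)^(1/alpha).  Here alpha = 3.
alpha = 3.0
N = 200_000
phi = (np.arange(N) + 0.5) * (2 * np.pi / N)     # midpoint grid on (0, 2*pi)
dphi = 2 * np.pi / N
cph, sph = np.cos(phi), np.sin(phi)
w = (alpha - 1) * np.abs(cph * sph) ** (alpha - 2) \
    * (np.abs(cph) ** alpha + np.abs(sph) ** alpha) ** (1 / alpha - 2) / 4.0

def c_from_nu(t, s):
    return float(np.sum(np.abs(t * cph + s * sph) * w) * dphi)

for t, s in [(1.0, 0.0), (1.0, 2.0), (0.5, -1.3)]:
    print(c_from_nu(t, s), (abs(t) ** alpha + abs(s) ** alpha) ** (1 / alpha))
```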

The next theorem (see [172], §4, 1987), apparently rather trivial, states that L_α-norm-symmetric random vectors, α ≤ 2, play a crucial role in the theory of pseudo-isotropic vectors. It shows that if there exists a pseudo-isotropic random vector X = (X_1, …, X_n) with function c which cannot be expressed as an L_α-norm for any α ∈ (0, 2], then X_1 does not have any positive moment. We should underline here that the problem of the existence of a quasi-norm c which cannot be expressed as an L_α-norm for any α ≤ 2, but for which there exists a non-trivial function ϕ such that ϕ(c(·)) is positive definite, is still open. Because of Theorem II.2.6 we are mainly interested in this paper in subspaces of L_α-spaces for α ≤ 2.

Theorem II.2.6 (Misiewicz [164], Th. 1). Assume that the random vector X = (X_1, …, X_n) ∈ R^n is pseudo-isotropic and there exists ε > 0 such that E|X_1|^ε < ∞. Then there exist a maximal positive number α ∈ (0, 2] and a corresponding finite positive symmetric measure ν on the unit sphere S^{n−1} ⊂ R^n such that
\[ c(\xi)^{\alpha} = \int_{S^{n-1}} \Big|\sum_{k=1}^{n} \xi_k x_k\Big|^{\alpha}\, \nu(dx). \]

Proof. Denote by ν_1 the distribution of the random vector X. Let p = min{ε, 2}. Without loss of generality we can assume that E|X_1|^p = 1. Notice that
\[ c(\xi)^p = E|c(\xi)X_1|^p = E\Big|\sum_{k=1}^{n} \xi_k X_k\Big|^p = \int_{\mathbb{R}^n} |\langle \xi, x\rangle|^p\, \nu_1(dx). \]

It follows from Theorem II.1.3 that there exists a symmetric p-stable random vector Y with characteristic function exp{−c(ξ)^p}. This means that the function c(ξ)^p is negative definite on R^n (see the definition on page 34). Define now
\[ \alpha = \sup\{p \in (0, 2] : c(\xi)^p \text{ is a negative definite function on } \mathbb{R}^n\}. \]
Since a limit of negative definite functions is negative definite, it follows that c(ξ)^α = lim c(ξ)^p as p ↗ α is negative definite on R^n; consequently, the function exp{−c(ξ)^α} is positive definite on R^n, and therefore it is the characteristic function of some random vector (Z_1, …, Z_n). For every t ∈ R and every ξ ∈ R^n we have
\[ E\exp\Big\{it\sum_{k=1}^{n} \xi_k Z_k\Big\} = \exp\{-|t|^{\alpha}\, c(\xi)^{\alpha}\}, \]
which means that all one-dimensional projections of the random vector (Z_1, …, Z_n) are symmetric α-stable; consequently (see Corollary II.1.1), the random vector (Z_1, …, Z_n) is symmetric α-stable. Let ν be the spectral measure of (Z_1, …, Z_n); then the characteristic function of Z can be written as
\[ E\exp\Big\{i\sum_{k=1}^{n} \xi_k Z_k\Big\} = \exp\Big\{-\int_{S^{n-1}} |\langle \xi, x\rangle|^{\alpha}\, \nu(dx)\Big\}, \]
which leads to the desired expression for c(ξ).

The next theorem was proved by D. Richards in 1985 (see [199], [200]).

Theorem II.2.7. Let X = (X_1, …, X_n) be a pseudo-isotropic random vector on R^n with characteristic function ϕ(c(ξ)), ξ ∈ R^n. If ϕ(c(ξ)) ∈ L_1(R^n), or if ∫_0^∞ r^{n−1}ϕ(r) dr < ∞, then the density function f(x) of X can be written in the following way:
\[ f(x) = \frac{1}{(2\pi)^n} \int_0^{\infty} r^{n-1}\varphi(r)\, I_c(xr)\, dr, \]
where
\[ I_c(x) \equiv \int\limits_{c(u)=1} \cos\langle x, u\rangle\, \omega(du), \qquad x \in \mathbb{R}^n, \]
and
\[ \omega(du) = \sum_{k=1}^{n} (-1)^{k+1} u_k\, du_1 \ldots du_{k-1}\, du_{k+1} \ldots du_n. \]

Corollary II.2.1. Under the assumptions of Theorem II.2.7, there exists a probability measure ν on {y ∈ R^n : c(y) = 1} such that the density function f(x) of the pseudo-isotropic random vector X can be written as
\[ f(x) = C \int_0^{\infty} r^{n-1}\varphi(r)\, \hat{\nu}(xr)\, dr, \qquad \text{where}\quad \hat{\nu}(x) = \int\limits_{c(y)=1} \cos\langle x, y\rangle\, \nu(dy). \]

Proof. It was shown by Richards [199] that the restriction of the measure ω to the set {y ∈ R^n : c(y) = 1} is finite and positive. Hence the measure ω(·)/I_c(0) is a probability measure on {y ∈ R^n : c(y) = 1}, and the formula for the density f(x) follows easily from Theorem II.2.7 with C = I_c(0)/(2π)^n.

Proposition II.2.3. (NP) Let X = (X_1, …, X_n) and Y = (Y_1, …, Y_n) be independent pseudo-isotropic random vectors with characteristic functions ϕ_1(c_1(ξ)) and ϕ_2(c_2(ξ)) respectively. If X + Y is pseudo-isotropic with characteristic function ϕ(c(ξ)), then either there exist positive constants k_1, k_2 such that c_1(ξ) = k_1 c_2(ξ) = k_2 c(ξ) for all ξ ∈ R^n, or there exist positive numbers m < M < ∞ such that for every r/s ∈ [m, M] there exists a positive constant c(r, s) such that for every t > 0, ϕ_1(rt)ϕ_2(st) = ϕ(c(r, s)t).

Proof. (a) If there exists a positive constant k_1 > 0 such that c_1(ξ) = k_1 c_2(ξ) for all ξ ∈ R^n, then the characteristic function of X + Y can be written as ψ(c_2(ξ)) for ψ(t) = ϕ_1(k_1 t)ϕ_2(t). By Proposition II.2.1 there exists a positive constant a such that c(ξ) = a c_2(ξ), thus k_2 = k_1/a. Assume now that there is no k > 0 such that c_1(ξ) = k c_2(ξ) for all ξ ∈ R^n. Consider the function q(ξ) = c_1(ξ)/c_2(ξ), ξ ∈ R^n. Since c_2(ξ) > 0 for every ξ ≠ 0, and both functions c_1 and c_2 are continuous, the function q attains its extremes on the unit sphere S^{n−1}, and q(S^{n−1}) = [m, M] ⊂ (0, ∞). Choose ξ ∈ R^n such that q(ξ) = r/s ∈ [m, M]. Then
\[ \varphi_1(rt)\,\varphi_2(st) = \varphi_1\Big(c_1\Big(\xi\,\frac{st}{c_2(\xi)}\Big)\Big) \cdot \varphi_2\Big(c_2\Big(\xi\,\frac{st}{c_2(\xi)}\Big)\Big) = \varphi\Big(\frac{s\,c(\xi)}{c_2(\xi)}\, t\Big), \]
thus the statement holds with c(r, s) = s c(ξ)/c_2(ξ).

II.3. Elliptically contoured vectors. The investigation of elliptically contoured distributions started in 1938 with the paper [215] of Schoenberg, devoted to random vectors invariant under isometries in R^n and in ℓ_2. Later this concept was generalized to elliptically contoured random vectors, which are in fact images under linear operators of vectors invariant under isometries. In this paper we recall only some basic properties of elliptically contoured random vectors. For further information we refer the reader to [44], which treats the problem mainly from the statistical point of view, and to [172], where the emphasis is put on measure theory. Both papers contain rich bibliographies.

In this paper we discuss only some characterizing representations and properties of elliptically contoured random vectors, as they can be helpful in the general theory of pseudo-isotropic random vectors. For this reason, from the collection of equivalent definitions of elliptically contoured random vectors we choose here the definition of a pseudo-isotropic random vector with the function c specified as a norm defined by an inner product on R^n.

Definition II.3.1. A random vector X = (X_1, …, X_n) is elliptically contoured if it is pseudo-isotropic with a function c : R^n → [0, ∞) defined by an inner product on R^n; i.e. there exists a symmetric positive definite n×n matrix ℜ such that
\[ c(\xi)^2 = \langle \xi, \Re\xi\rangle \qquad \forall \xi \in \mathbb{R}^n. \]

Remark II.3.1. If ℜ = I, i.e. if c(ξ)² = Σ_{k=1}^n ξ_k², then elliptically contoured random vectors with such c are also called in the literature rotationally invariant, spherically generated or spherically contoured (see Askey [12], Box [28], Gualtierotti [80], Huang and Cambanis [95], Kelker [116], [117], Kingman [119], [120], Letac [136]).

From now on we will use the notation EC(ϕ, ℜ, n) for the distribution of an elliptically contoured random vector X = (X_1, …, X_n) with c(ξ)² = ⟨ξ, ℜξ⟩ and E exp{itX_1} = ϕ(t). The following lemma was proved by Crawford (see [48]) in 1977, originally for absolutely continuous distributions:

Lemma II.3.1. Let X = (X_1, …, X_n) be elliptically contoured with distribution EC(ϕ, ℜ, n), ℜ = BᵀB, and let C be a non-singular n×n matrix. If Y = B^{−1}CX, then Y is elliptically contoured with distribution EC(ϕ, CᵀC, n).

As a corollary, a random vector X on R^n is elliptically contoured if and only if there exists a non-degenerate linear operator B : R^n → R^n such that B^{−1}X is rotationally invariant. The next crucial result was proved in 1938 by Schoenberg (see [215], [217]).

Theorem II.3.1. A random vector X = (X_1, …, X_n) is rotationally invariant if and only if there exists a non-negative random variable Θ such that X ≐ (U_1, …, U_n)Θ, where the random vector U^{(n)} = (U_1, …, U_n) is independent of Θ and has a uniform distribution on the unit sphere S^{n−1} = {x ∈ R^n : Σ_{k=1}^n x_k² = 1}.

Proof. It is enough to define Θ = ‖X‖_2 and to check that (U_1, …, U_n)Θ and X have the same characteristic function.

Consider the random vector U^{(n)} = (U_1, …, U_n) defined in Theorem II.3.1. It is evident that the distribution of U^{(n)}, being supported on S^{n−1}, cannot be absolutely continuous with respect to the Lebesgue measure on R^n. The distribution of U^{(n)} is of sign-symmetric Dirichlet type with parameters (2, …, 2; 1, …, 1), i.e. the following conditions hold:
(i) (U_1, …, U_n) is a sign-symmetric random vector;
(ii) Σ_{k=1}^n U_k² = 1 with probability one;
(iii) the joint density function of (U_1, …, U_{n−1}) is
\[ \frac{\Gamma(n/2)}{\Gamma(1/2)^n}\Big(1 - \sum_{k=1}^{n-1} u_k^2\Big)_{+}^{-1/2}, \]
where (a)_+ = a whenever a ≥ 0, and (a)_+ = 0 otherwise.

One can show that the joint density function of the first k components (U_1, …, U_k), k < n, of the random vector U^{(n)} is
\[ \frac{\Gamma(n/2)}{\Gamma((n-k)/2)\,\Gamma(1/2)^k}\Big(1 - \sum_{j=1}^{k} u_j^2\Big)_{+}^{(n-k)/2-1}; \]


Theorem II.3.2. The marginal density function of (X_1, …, X_k), k < n, for the rotationally invariant random vector X = (X_1, …, X_n) = U^{(n)}Θ admits the following representation:
\[ f_k(x) = \frac{\Gamma(n/2)}{\Gamma((n-k)/2)\,\Gamma(1/2)^k} \int_0^{\infty} r^{-k}\Big(1 - r^{-2}\sum_{j=1}^{k} x_j^2\Big)_{+}^{(n-k)/2-1} \lambda(dr), \qquad x \in \mathbb{R}^k, \]
where λ is the distribution of Θ. If (X_1, …, X_n) is elliptically contoured with representation BU^{(n)}Θ, then the k-dimensional marginal density of (X_1, …, X_k), k < n, is of the form
\[ f_k(x) = \frac{\Gamma(n/2)\,|\Re_k|^{-1/2}}{\Gamma((n-k)/2)\,\Gamma(1/2)^k} \int_0^{\infty} r^{-k}\big(1 - r^{-2}\langle x, \Re_k^{-1}x\rangle\big)_{+}^{(n-k)/2-1} \lambda(dr), \qquad x \in \mathbb{R}^k, \]
where the k×k matrix ℜ_k is built from the first k rows and columns of the matrix ℜ = BᵀB.

Remark II.3.2. The formula for the density function of the k-dimensional projection (X_1, …, X_k), k < n, of the elliptically contoured random vector X = BU^{(n)}Θ can also be written in the following way:
\[ f_k(x) = |\Re_k|^{-1/2}\, f(\langle x, \Re_k^{-1} x\rangle), \]
where f : [0, ∞) → [0, ∞) is an (n−k)/2-times monotonic function. More about α-times monotonic functions can be found in the paper of Williamson [239]. From that work it is enough to know that g is α-times monotonic if it admits a representation
\[ g(r) = \int_0^{\infty} (1 - ru)_{+}^{\alpha-1}\, dF(u), \]
with F a non-decreasing, non-negative function. In the case α ∈ N, the function g is α-times monotonic if and only if it is α-times differentiable and (−1)^k g^{(k)}(t) ≥ 0 for every 0 ≤ k ≤ α.

Evidently, U^{(n)} is pseudo-isotropic; thus its one-dimensional projections are all the same and
\[ E\exp\Big\{i\sum_{k=1}^{n} \xi_k U_k\Big\} = E\exp\Big\{i\Big(\sum_{k=1}^{n} \xi_k^2\Big)^{1/2} U_1\Big\} = \frac{\Gamma(n/2)}{\Gamma((n-1)/2)\,\Gamma(1/2)} \int_{-1}^{1} \cos(\|\xi\|_2 u)\,(1-u^2)^{(n-3)/2}\, du \equiv \Omega_n(\|\xi\|_2). \]
The function Ω_n, which plays an important role in the theory of pseudo-isotropic random vectors, can also be written in the following way:
\[ \Omega_n(r) = \frac{2\,\Gamma(n/2)}{\Gamma((n-1)/2)\,\Gamma(1/2)} \int_0^{\pi/2} \cos(r\sin\varphi)\,\cos^{n-2}(\varphi)\, d\varphi = \Gamma\Big(\frac{n}{2}\Big)\Big(\frac{2}{r}\Big)^{n/2-1} J_{(n-2)/2}(r), \]
where J_ν(r) is a Bessel function, i.e. a cylindrical function of the first kind; thus it is a solution of the differential equation (for details see e.g. [76])
\[ \frac{d^2 J_\nu(r)}{dr^2} + \frac{1}{r}\,\frac{dJ_\nu(r)}{dr} + \Big(1 - \frac{\nu^2}{r^2}\Big) J_\nu(r) = 0. \]
This implies that
\[ \frac{d^2}{dr^2}\Omega_n(r) + \frac{n-1}{r}\,\frac{d}{dr}\Omega_n(r) + \Omega_n(r) = 0. \]
Now, we have the following:

Theorem II.3.3 (Schoenberg [217]). If X = (X_1, …, X_n) is an elliptically contoured random vector with representation X ≐ BU^{(n)}Θ, ℜ = BᵀB, then
\[ E\exp\Big\{i\sum_{k=1}^{n} \xi_k X_k\Big\} = \int_0^{\infty} \Omega_n\big((\langle \xi, \Re\xi\rangle)^{1/2} r\big)\, \lambda(dr), \]
where λ is the distribution of Θ.
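For small odd n the Bessel form of Ω_n reduces to elementary functions; one can check, for instance, that Ω_3(r) = sin r / r and Ω_5(r) = 3(sin r − r cos r)/r³. A short numerical sketch of the defining integral (midpoint rule, numpy only) confirms this:

```python
import numpy as np
from math import gamma, sin, cos

# Sketch: Omega_n(r) = Gamma(n/2)/(Gamma((n-1)/2)Gamma(1/2))
#                      * int_{-1}^{1} cos(r u) (1 - u^2)^((n-3)/2) du,
# which reduces to sin(r)/r for n = 3 and 3(sin r - r cos r)/r^3 for n = 5.
def omega(n, r, N=200_000):
    u = -1.0 + (np.arange(N) + 0.5) * (2.0 / N)       # midpoint grid on (-1, 1)
    const = gamma(n / 2) / (gamma((n - 1) / 2) * gamma(0.5))
    return const * float(np.sum(np.cos(r * u) * (1 - u * u) ** ((n - 3) / 2)) * 2.0 / N)

r = 2.7
print(omega(3, r), sin(r) / r)
print(omega(5, r), 3 * (sin(r) - r * cos(r)) / r ** 3)
```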

We can see that not every symmetric positive definite function ϕ on R with ϕ(0) = 1 has the property that ϕ(‖·‖_2) is the characteristic function of an elliptically contoured random vector. In 1973 Askey [12] proved the following:

Theorem II.3.4. Let ϕ : [0, ∞) → R be continuous and such that ϕ(0) = 1, lim_{t→∞} ϕ(t) = 0 and (−1)^k ϕ^{(k)}(t) is convex for k = [n/2]. Then for every positive definite n×n matrix ℜ, ϕ((⟨ξ, ℜξ⟩)^{1/2}) is the characteristic function of some elliptically contoured random vector.

Finally, let us calculate the Richards function I_2(x), x ∈ R^n (see Theorem II.2.7), for rotationally invariant random vectors, i.e. for the function c(ξ)² = Σ_{k=1}^n ξ_k². We have
\[ I_2(x) = \int\limits_{\sum_{k=1}^{n} u_k^2 = 1} \cos\langle x, u\rangle\, \omega(du) = 2\int\limits_{\sum_{k=1}^{n-1} u_k^2 \le 1} \cos\langle x, u\rangle\Big(1 - \sum_{k=1}^{n-1} u_k^2\Big)^{(n-3)/2} du_1\ldots du_{n-1}, \]
where u_n is substituted by (1 − Σ_{k=1}^{n−1} u_k²)^{1/2} in the second integral. We have obtained an expression that is, up to a multiplicative constant, equal to the characteristic function of the random vector U^{(n)}; hence
\[ I_2(x) = \frac{\Gamma((n-1)/2)\,\Gamma(1/2)}{\Gamma(n/2)}\, \Omega_n(\|x\|_2). \]
It now follows that if a rotationally invariant random vector X = U^{(n)}Θ with representation EC(ϕ, I, n) has an integrable characteristic function ϕ(‖ξ‖_2), ξ ∈ R^n, then its density function can be written as
\[ f(x) = \frac{\Gamma((n-1)/2)\,\Gamma(1/2)}{\Gamma(n/2)} \int_0^{\infty} r^{n-1}\varphi(r)\, \Omega_n(\|x\|_2 r)\, dr. \]
Let us also notice here that ϕ(r) is the characteristic function of the random variable Θ·U_1.


II.4. α-symmetric random vectors

Definition II.4.1. A symmetric random vector X = (X_1, …, X_n) has an α-symmetric distribution, α > 0, if X is pseudo-isotropic with the function c : R^n → [0, ∞) given by
\[ c(\xi) = \begin{cases} \big(\sum |\xi_k|^{\alpha}\big)^{1/\alpha} & \text{if } 0 < \alpha < \infty, \\ \max\{|\xi_1|, \ldots, |\xi_n|\} & \text{if } \alpha = \infty. \end{cases} \]

The existence of α-symmetric random vectors, at least for α ∈ (0, 2], follows immediately from the existence of symmetric stable random vectors with independent identically distributed coordinates. It turns out, however, that it is not easy to get a full characterization of α-symmetric random vectors on R^n, except for the case α = 2 (which was shown in the previous section). The main reason is the complexity of the formulas appearing in the following lemma, where we calculate the Richards representation (see Theorem II.2.7 and [199] for the proof) of the density function of an α-symmetric random vector:

Lemma II.4.1. If the distribution of an α-symmetric random vector X = (X_1, …, X_n) is absolutely continuous with respect to the Lebesgue measure, then its density function f(x) can be written as follows:
\[ \text{(II.4.1)}\qquad f(x) = \frac{1}{(2\pi)^n} \int_0^{\infty} r^{n-1}\varphi(r)\, I_{\alpha}(xr)\, dr, \]
where ϕ(t) = E exp{itX_1} and, with the notation u′ = (u_1, …, u_n), u″ = (u_1, …, −u_n), where u_n = (1 − Σ_{k=1}^{n−1}|u_k|^α)^{1/α}, the function I_α(x) can be expressed as
\[ I_{\alpha}(x) = \int\limits_{\sum_{k=1}^{n-1}|u_k|^{\alpha} \le 1} \big(\cos\langle x, u'\rangle + \cos\langle x, u''\rangle\big)\Big(1 - \sum_{k=1}^{n-1}|u_k|^{\alpha}\Big)^{1/\alpha - 1} du_1\ldots du_{n-1}. \]

Proof. Notice first that the characteristic function of X, ϕ(‖ξ‖_α), is sign-invariant, i.e. it does not depend on the signs of the components of the vector ξ. Thus the density function f(x) has to be sign-invariant as well. Now, using the inverse Fourier transform, we obtain
\[ f(x) = \frac{1}{(2\pi)^n} \int_{\mathbb{R}^n} \cos\langle x, \xi\rangle\, \varphi(\|\xi\|_{\alpha})\, d\xi_1\ldots d\xi_n = \frac{1}{(2\pi)^n} \int_0^{\infty}\int_{\mathbb{R}^{n-1}} \big(\cos\langle x, \xi'\rangle + \cos\langle x, \xi''\rangle\big)\, \varphi(\|\xi\|_{\alpha})\, d\xi_1\ldots d\xi_n, \]
for ξ′ = (ξ_1, …, ξ_n) and ξ″ = (ξ_1, …, −ξ_n). Substituting ξ_1 = ru_1, …, ξ_{n−1} = ru_{n−1}, ξ_n = r(1 − Σ_{k=1}^{n−1}|u_k|^α)^{1/α}, we get the desired formula.

Example II.4.1. In the case α = 1 the expression for I_1(x) becomes especially simple:
\[ I_1(x) = \int\limits_{\sum_{k=1}^{n-1}|u_k| \le 1} \big(\cos\langle x, u'\rangle + \cos\langle x, u''\rangle\big)\, du_1\ldots du_{n-1}. \]
For n = 2 and |x| ≠ |y| it can easily be calculated as
\[ I_1(x, y) = 4\,\frac{x\sin x - y\sin y}{x^2 - y^2}, \]
and for n = 3, |x| ≠ |y| ≠ |z|, it equals
\[ 8\Big(\frac{x^2\cos x}{(x^2-y^2)(z^2-x^2)} + \frac{y^2\cos y}{(x^2-y^2)(y^2-z^2)} + \frac{z^2\cos z}{(y^2-z^2)(z^2-x^2)}\Big). \]

Example II.4.2. In the case α = ∞ and n = 3, the expression for I_∞(x, y, z) was obtained by Misiewicz (see [163]) and equals
\[ I_{\infty}(x, y, z) = \frac{2}{xyz}\big\{(-x+y+z)\cos(-x+y+z) + (x-y+z)\cos(x-y+z) + (x+y-z)\cos(x+y-z) - (x+y+z)\cos(x+y+z)\big\}, \]
whenever xyz(−x+y+z)(x−y+z)(x+y−z)(x+y+z) ≠ 0.

Let P_+ denote the set of probability measures on [0, ∞). For every bounded Borel function f ∈ L_∞[0, ∞) and λ ∈ P_+, let
\[ (f \odot \lambda)(t) := \int_0^{\infty} f(rt)\, \lambda(dr), \]
the scale mixture of f with respect to the measure λ. It is easy to see that (f ⊙ λ) ⊙ ν = (f ⊙ ν) ⊙ λ. Further, for A ⊂ L_∞[0, ∞), let
\[ A \odot \lambda = \{f \odot \lambda : f \in A\}, \qquad A \odot P_+ = \{f \odot \lambda : f \in A,\ \lambda \in P_+\}. \]

With a slight change of the notation of [38], we denote by Φ_n(α), α > 0, the set of all functions ϕ : [0, ∞) → R such that ϕ(‖ξ‖_α) is a characteristic function (of an α-symmetric random vector) on R^n. The set Φ_n(α) coincides with the set {μ̂ : μ ∈ M(c, n)} for the function c(ξ) = ‖ξ‖_α; thus it follows from Theorem II.2.4 that

(P1) for every n ∈ N and every α > 0, Φ_n(α) is a convex, closed subset of the set of all real characteristic functions on R^n.

If ϕ ∈ Φ_n(α) and λ ∈ P_+, then (ϕ ⊙ λ)(‖ξ‖_α) is the characteristic function of the random vector XΘ, where X = (X_1, …, X_n) and Θ are independent, ϕ(‖ξ‖_α) is the characteristic function of X, and λ is the probability distribution of the random variable Θ. This implies that

(P2) Φ_n(α) ⊙ P_+ = Φ_n(α).

It is clear that the marginals of α-symmetric distributions are α-symmetric as well, hence

(P3) Φ_n(α) ⊂ Φ_m(α) if n ≥ m.

Proposition II.4.1. For every n ≥ 3 and every α > 0,

(P4) e^{−t^β} ∈ Φ_n(α) ⇔ β ≤ α ≤ 2.

History of the proof. Notice first that e^{−t^β} ∈ Φ_n(α) if and only if exp{−‖ξ‖_α^β} is positive definite on R^n, if and only if ℓ_α^n embeds isometrically into some L_β-space. The sufficiency was already known to P. Lévy [138]. The proof can easily be obtained from the construction presented in Example II.1.3, and it does not depend on the dimension of the space ℓ_α^n.

The proof of necessity has a long history, going back to the first investigations of symmetric stable random vectors [138] and the first Schoenberg problem [217] (see also the Introduction). It is easy to see (and was already known to P. Lévy and Schoenberg) that β must be less than or equal to 2. In 1963 Herz [92] proved that if 1 < β < 2 and ℓ_α^n embeds isometrically into some L_β-space, then β ≤ α ≤ β(β − 1)^{−1}. In 1973 Witsenhausen [240] proved that if α > 2.7 and n ≥ 3, then ℓ_α^n does not embed isometrically into any L_1-space. In 1976 Dor [52] (see also [29]) proved that if α, β ∈ [1, ∞) and ℓ_α^n embeds isometrically into some L_β-space, then 1 ≤ β ≤ α ≤ 2. In 1991 Koldobsky [121] proved that if α > 2 and n ≥ 3, then ℓ_α^n does not embed isometrically into any L_β-space, β ≤ 2. Note that the result of Koldobsky finally solves, after 53 years, the first Schoenberg question. And in 1995 Grząślewicz and Misiewicz [78] noticed that the previous considerations do not cover all the cases when α < 1 or β < 1. They proved that if 0 < α < β ≤ 2, then ℓ_α^2 does not embed isometrically into any L_β-space.

In the case α ≥ 1, n = 2, the existence of α-symmetric random vectors can be easily obtained from Theorem II.2.4 stating that for every α ≥ 1 the function

exp{−(|ξ1|α+ |ξ2|α)1/α}

is positive definite on R2, thus it defines an α-symmetric 1-stable random vector (X 1, X2).

In Theorem 2.1 of [52] Dor proved that if 1 < β < 2 < α then ℓ^2_α does not embed isometrically into any L_β-space. Combining these facts with the result of Koldobsky and the other results described in the history of the proof of necessity in Proposition II.4.1, we have

(P5) e^{−t^β} ∈ Φ2(α) ⇔ {0 < β ≤ 1, α > 2} or {β ≤ α ≤ 2}.
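The two-dimensional branch with α > 2 can be illustrated by the same finite-sample eigenvalue test as in the n ≥ 3 case: for α = 3 and β = 1 the function exp{−(|ξ1|³ + |ξ2|³)^{1/3}} is still positive definite on R², even though it fails on R^n for n ≥ 3 by Koldobsky's result (a sketch assuming numpy):

```python
import numpy as np

rng = np.random.default_rng(1)
pts = rng.standard_normal((50, 2))            # random points in R^2

# phi(xi) = exp(-||xi||_3): the beta = 1, alpha = 3 case of (P5)
diffs = pts[:, None, :] - pts[None, :, :]
norm3 = np.sum(np.abs(diffs) ** 3.0, axis=-1) ** (1.0 / 3.0)
K = np.exp(-norm3)

print(np.linalg.eigvalsh(K).min())            # nonnegative up to rounding
```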

A complete characterization of the classes Φn(α) is known only in the cases (P6)–(P9) listed below.

(P6) Φn(2) = Ωn ⊙ P+,

where Ωn(‖ξ‖_2), ξ ∈ R^n, is the characteristic function of the random vector U = (U1, . . . , Un) having the uniform distribution on the unit sphere S^{n−1} = {x ∈ R^n : ‖x‖_2 = 1} (details are given in Section II.3).
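For n = 3 the function Ωn has the elementary closed form Ω3(r) = sin(r)/r, a standard fact about the uniform distribution on the sphere S² (stated here as background, not derived in the text above). A Monte Carlo sketch assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(2)
# Uniform sample on S^2: normalize standard Gaussian vectors
g = rng.standard_normal((200_000, 3))
u = g / np.linalg.norm(g, axis=1, keepdims=True)

xi = np.array([0.3, -1.1, 0.7])
r = np.linalg.norm(xi)

empirical = np.cos(u @ xi).mean()    # characteristic function E cos<xi, U>
closed_form = np.sin(r) / r          # Omega_3(||xi||_2) for n = 3
print(abs(empirical - closed_form))  # small Monte Carlo error
```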

(P7) Φn(1) = Ωn ⊙ λn ⊙ P+,

where λn is the distribution of the random variable Θ^{−1/2}, Θ having the density function

h_n(t) = (Γ(n/2) / (π^{1/2} Γ((n − 1)/2))) t^{−n/2} (t − 1)_+^{(n−3)/2}.
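As a sanity check on the normalizing constant of h_n: for n = 3 the density reduces to h_3(t) = (1/2) t^{−3/2} on t ≥ 1 (since Γ(3/2) = π^{1/2}/2 and the exponent (n − 3)/2 vanishes), and for n = 5 it is (3/4) t^{−5/2}(t − 1); both integrate to 1. A numerical sketch assuming scipy is available:

```python
import math
import numpy as np
from scipy.integrate import quad

def h(t, n):
    """The density h_n(t) from (P7), supported on t >= 1."""
    if t <= 1.0:
        return 0.0
    const = math.gamma(n / 2) / (math.sqrt(math.pi) * math.gamma((n - 1) / 2))
    return const * t ** (-n / 2) * (t - 1.0) ** ((n - 3) / 2)

for n in (3, 5):
    total, _ = quad(h, 1.0, np.inf, args=(n,))
    print(n, total)                  # each total is 1 up to quadrature error
```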

This fact was proven by Cambanis, Keener and Simons (see [38]) in 1983. The proof was based on the formula for I_1(x) given in Example II.4.1, and also on the following integral formula:

∫_0^{π/2} f( x²/sin²t + y²/cos²t ) dt = ∫_0^{π/2} f( (|x| + |y|)²/sin²t ) dt,

which holds for every measurable function f for which one of the above integrals makes sense.
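The identity is easy to test numerically for a particular f; the sketch below (assuming scipy) takes f(u) = e^{−u} and a fixed pair (x, y):

```python
import numpy as np
from scipy.integrate import quad

def lhs(t, x, y):
    # integrand of the left-hand side with f(u) = exp(-u)
    return np.exp(-(x**2 / np.sin(t)**2 + y**2 / np.cos(t)**2))

def rhs(t, x, y):
    # integrand of the right-hand side with f(u) = exp(-u)
    return np.exp(-((abs(x) + abs(y))**2 / np.sin(t)**2))

x, y = 1.0, 0.7
left, _ = quad(lhs, 0.0, np.pi / 2, args=(x, y))
right, _ = quad(rhs, 0.0, np.pi / 2, args=(x, y))
print(abs(left - right))             # the two sides agree up to quadrature error
```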

(P8) Φn(∞) = {1} if n ≥ 3.

This result was proven by J. Misiewicz in 1989 (see [163]) and the proof was based on the formula for I∞(x) given in Example II.4.2. It was shown that every density function given
