
Generating Data With Prescribed Power Spectral Density

Piet M. T. Broersen and Stijn de Waele

Abstract—Data generation is straightforward if the parameters of a time series model define the prescribed spectral density or covariance function. Otherwise, a time series model has to be determined. An arbitrary prescribed spectral density will be approximated by a finite number of equidistant samples in the frequency domain. This approximation becomes accurate by taking more and more samples. Those samples can be inversely Fourier transformed into a covariance function of finite length. The covariance in turn is used to compute a long autoregressive (AR) model with the Yule–Walker relations. Data can be generated with this long AR model. The long AR model can also be used to estimate time series models of different types to search for a parsimonious model that attains the required accuracy with fewer parameters. It is possible to derive objective rules to choose a preferred type with a minimal order for the generating time series model. That order will generally depend on the number of observations to be generated. The quality criterion for the generating time series model is that the spectrum estimated from the generated number of observations cannot be distinguished from the prescribed spectrum.

Index Terms—1/f noise, ARMA process, linear filtering, order selection, spectral analysis, time series models.

I. INTRODUCTION

THE ACCURACY of many experiments is limited by the presence of measurement noise in the observations. It will certainly be a problem in sensitive satellite measurement data [1]. Often, a good characterization of the noise spectrum is known from previous experiments or from a detailed physical description of the sensor and its environment. In such circumstances, it is possible to test the signal processing sequence in advance by using a noise realization of a stochastic process that has the known spectral density. The purpose may be to verify that the proposed signal processing allows for the desired accuracy. One specific application is the new gravity field and ocean circulation explorer mission, to be launched in 2006 [1]. This mission aims at the development of an improved model of the earth's gravity field. The presence of colored observation noise in a huge number of observations leads to a difficult numerical regression problem, demanding a weighted least squares solution. The weighting matrix is the inverse of the extremely large covariance matrix of the noise. The power spectral density of the colored noise is concentrated in the low frequency part. A time series description of the known noise spectral density gives the possibility to generate noise realizations. The same time series model can be used to design an inverse filter for

Manuscript received June 15, 2002; revised April 16, 2003.

The authors are with the Department of Applied Physics, Delft University of Technology, Delft, The Netherlands (e-mail: broersen@tn.tudelft.nl).

Digital Object Identifier 10.1109/TIM.2003.814824

the colored noise; for real-time implementation, it is important that the filter has a low order. The filtered noise becomes uncorrelated, which is an important advantage in the remaining regression problem [1]. Other examples for the generation of data emerge in turbulence, where the spectrum is proportional to a power of the frequency, and in other physical problems, e.g., with 1/f noise.

Time series modeling is a parametric description of spectral densities. The three model types that can be used for time series models are autoregressive (AR), moving average (MA), and combined ARMA models. All stationary stochastic processes can be characterized by AR or by MA models of infinite order [2]. AR models are more suitable for spectral peaks; MA models are better for valleys. In practice, most processes can be described adequately by AR(p), MA(q), or ARMA(p, q) processes of finite orders p and/or q [3].

Generating stationary data for a given ARMA process requires some care. Using zeros or any arbitrary values as initial conditions, generated signals become stationary only after the duration of the impulse response. Unfortunately, the impulse response is only finite for a MA process. It is infinitely long for AR and ARMA processes. Therefore, this primitive method of data generation without care for initial conditions can only be exact for MA processes. It is at best an approximation for AR or ARMA processes. A better method is found by separating the generation into an AR and a MA part. Consider the joint probability density function of a finite number of AR observations with a prescribed correlation function. Data can be generated that obey that prescribed AR correlation. A realization of the ARMA process is obtained by filtering the AR data with the MA polynomial.

So far, it has been assumed that the spectral requirements are already formulated as a time series model. However, prescriptions for data can also be given in terms of correlation functions or power spectral densities. In principle, N observations can be considered as a single period of a multisine signal, which can be given the prescribed spectral density at maximally N/2 discrete frequencies. Multisine signals are particularly appropriate as input signals for system identification at a limited number of frequencies, but not for applications like 1/f noise. The discrete spectrum would require a new signal specification for each value of N, at different frequencies. It would only have the desired spectral properties for sample sizes that are multiples of N, if care is taken with the transients of the initial conditions. Therefore, the treatment of these types of prescriptions for data also preferably uses time series models, which produce stationary stochastic data with a continuous spectral density that can be generated for different values of N with the same model. A solution has been given to


fit an AR, MA, or an ARMA model to a finite number of sampled spectral values [4]. It uses the inverse Fourier transform of the spectrum as the desired covariance function. An AR model of high order is then calculated from those covariances with the Yule–Walker relations [2]. The AR model is also a basis to compute other time series models. Models can often be simplified by using the long AR model as input for a reduced statistics analysis [4].

This paper describes the joint probability density function (pdf) of an arbitrary number of observations of an AR(p) process. That is the basis for the generation of data. Prescribed spectral densities can be based on estimated or on exact knowledge. It is shown that order selection criteria can be adapted to the reliability or the accuracy of the spectral prescriptions. Furthermore, it is shown that an adequate generating process for the prescribed spectral density may depend on the number of observations to be generated.

II. ARMA MODELS

An ARMA(p, q) process is defined as [2]

x_n + a_1 x_{n-1} + ... + a_p x_{n-p} = ε_n + b_1 ε_{n-1} + ... + b_q ε_{n-q}   (1)

where ε_n represents a series of independent, identically distributed, zero mean white noise observations. The process is purely AR for q = 0 and purely MA for p = 0. The ARMA process can also be written with a polynomial A(z) of AR and a polynomial B(z) of MA parameters as

A(z) x_n = B(z) ε_n   (2)

with A(z) = 1 + a_1 z^{-1} + ... + a_p z^{-p} and B(z) = 1 + b_1 z^{-1} + ... + b_q z^{-q}. Models may have estimated polynomials of arbitrary orders, not necessarily equal to the true p and q. Processes and models are stationary if the poles, the roots of A(z), are inside the unit circle; they are invertible if the zeros, the roots of B(z), are inside the unit circle. The spectral density is given by [2]

h(ω) = (σ_ε²/2π) |B(e^{jω})|² / |A(e^{jω})|²   (3)

This shows that the parameters of a time series model, together with the variance σ_ε² of the exciting white noise, determine the spectral density. The covariance function is the inverse integral Fourier transform of (3). It can be approximated by an inverse discrete Fourier transform of a sampled h(ω). However, it is also possible to obtain an exact covariance function for a given ARMA(p, q) process by direct computations in the time domain [2]. Therefore, the parameters of a time series model are a good representation of the characteristics of a process, exact in both the time and the frequency domain.
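As a concrete illustration, the spectral density of an ARMA model can be evaluated numerically from its parameters. The following sketch is my own, not code from the paper; the function name and the normalized-frequency convention are assumptions, and the 1/(2π) normalization of (3) is left out for simplicity.

```python
import numpy as np

def arma_spectrum(a, b, var_eps, freqs):
    """Power spectral density of an ARMA(p, q) model, up to normalization.

    a, b    -- AR and MA parameter arrays [a1..ap] and [b1..bq]
               (the leading 1 of each polynomial is implied)
    var_eps -- variance of the exciting white noise
    freqs   -- normalized frequencies in cycles/sample, 0 <= f <= 0.5
    """
    z = np.exp(-2j * np.pi * np.asarray(freqs))
    # evaluate A(z) and B(z) on the unit circle
    A = 1 + sum(ak * z**(k + 1) for k, ak in enumerate(a))
    B = 1 + sum(bk * z**(k + 1) for k, bk in enumerate(b))
    return var_eps * np.abs(B)**2 / np.abs(A)**2
```

For a model with no parameters (white noise) the returned spectrum is flat and equal to the innovation variance, as expected.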

The quality of estimated ARMA(p, q) models, with polynomials indicated by a hat, is measured with the model error ME [5]. This measure can be used in simulations where an omniscient experimenter knows the true ARMA(p, q) parameters that generated the observations. Likewise, it can be used to evaluate the difference between a prescribed spectral density, expressed as an ARMA process, and the approximating ARMA spectrum. The ME is a scaled transformation of the expectation of the squared error of prediction PE

ME = N (PE/σ_ε² - 1).   (4)

The model error is asymptotically proportional to the spectral distortion SD, defined as

   (5)

where the hat denotes an approximating spectrum. In this way, the ME can also be defined for spectral densities that are not given by time series models. The expectation of the ME is, for unbiased models, independent of N and of the variance of the signal. Only true and estimated parameter values are required to compute the ME with (4). The asymptotical expectation of the ME of unbiased, efficiently estimated ARMA(p, q) models that contain at least all truly nonzero parameters is equal to the number of estimated parameters.

III. PROBABILITY DENSITY OF AR PROCESS

The normal distribution of a variable x with mean μ and variance σ² is given by

f(x) = (2πσ²)^{-1/2} exp(-(x - μ)²/(2σ²)).   (6)

The joint pdf of N observations x_1, ..., x_N, where

X = (x_1, x_2, ..., x_N)^T   (7)

is a jointly normally distributed vector stochastic variable with covariance matrix

P = E[X X^T]   (8)

is given by

f(X) = (2π)^{-N/2} (det P)^{-1/2} exp(-X^T P^{-1} X / 2).   (9)

This is a general expression that can be applied to AR, MA, or ARMA processes by expressing the Toeplitz matrix P in the parameters. It is often preferred in the literature, and it has computational advantages, to use as much as possible the white noise innovations with diagonal covariance in the probability density and likelihood functions [2]. The observations are then considered as filtered innovations. The probability density function of N observations of an AR(p) process with polynomial A(z), with normally distributed zero mean innovations ε_n, can also be written as a conditional product for the last N - p observations, given the first p

f(x_1, ..., x_N) = f(x_1, ..., x_p) f(x_{p+1}, ..., x_N | x_1, ..., x_p).   (10)


The first part of the right-hand side, for the last N - p observations, is given by [2, p. 347]

f(x_{p+1}, ..., x_N | x_1, ..., x_p) = Π_{n=p+1}^{N} (2πσ_ε²)^{-1/2} exp(-(x_n + a_1 x_{n-1} + ... + a_p x_{n-p})²/(2σ_ε²)).   (11)

The second part, describing the first p observations, is also known [2, p. 350]. However, a recursive form of that expression is a better starting point for the generation of data. By using conditional densities, it follows that

f(x_1, ..., x_p) = f(x_1) f(x_2 | x_1) ... f(x_p | x_{p-1}, ..., x_1).   (12)

Generally, for all observations x_m with 1 < m ≤ p, the conditional densities

f(x_m | x_{m-1}, ..., x_1)   (13)

are required. With those intermediate results, (12) can be written as

f(x_1, ..., x_p) = Π_{m=1}^{p} f(x_m | x_{m-1}, ..., x_1).   (14)

Elements of the Levinson–Durbin algorithm [6] can be used to evaluate this expression. This algorithm recursively computes parameter sets of increasing order from the AR correlation function, as well as variance expressions. The last parameter of an AR model of order m is always equal to the reflection coefficient k_m. The algorithm starts with

k_1 = a_1^{(1)} = -r(1)/r(0)   (15)

where r(m) denotes the covariance of the AR(p) process at lag m. The recursion for m = 2, ..., p is given by [7, p. 169]

k_m = a_m^{(m)} = -[r(m) + Σ_{j=1}^{m-1} a_j^{(m-1)} r(m-j)] / σ_{m-1}²,
a_j^{(m)} = a_j^{(m-1)} + k_m a_{m-j}^{(m-1)},   j = 1, ..., m-1.   (16)

At the final stage of the recursion, the polynomial of order p is equal to A(z) in (2). The polynomial of order m gives the best linear predictor of order m, or the best linear combination of m previous observations to predict the next observation. Hence, the conditional probability density of x_m, conditional on the m - 1 previous observations, has as expectation -Σ_{j=1}^{m-1} a_j^{(m-1)} x_{m-j}, with the variance σ_{m-1}². Using this in the conditional densities (13) gives, with (6),

f(x_m | x_{m-1}, ..., x_1) ~ N(-Σ_{j=1}^{m-1} a_j^{(m-1)} x_{m-j}, σ_{m-1}²).   (17)

The variance of the AR(p) process is given by

r(0) = σ_ε² / Π_{m=1}^{p} (1 - k_m²).   (18)

The recursive variance relation for intermediate AR orders can be expressed with increasing or decreasing index, which gives

σ_m² = σ_{m-1}² (1 - k_m²),   σ_0² = r(0).   (19)

Now, the conditional density (17) becomes, with the variances of (19),

f(x_m | x_{m-1}, ..., x_1) = (2πσ_{m-1}²)^{-1/2} exp(-(x_m + Σ_{j=1}^{m-1} a_j^{(m-1)} x_{m-j})²/(2σ_{m-1}²)).   (20)

The substitution of (20) in (14), and that result together with (11) in (10), is merely an exercise, with sums of products as result. However, as all ingredients for the probability density function are given, the derivation is sufficient to be used in generating data for an AR(p) process.
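The Levinson–Durbin recursion of (15), (16), and (19) can be sketched in a few lines. This is a generic textbook implementation, not the authors' code; the function name and return convention are my own.

```python
import numpy as np

def levinson_durbin(r):
    """Levinson-Durbin recursion: AR parameters of increasing order
    from the covariance function r[0..M].

    Returns the order-M parameter vector [a1..aM] and the list of
    residual variances s2[m] of the order-m prediction error, with
    s2[0] = r[0], the process variance.
    """
    M = len(r) - 1
    a = np.zeros(0)
    s2 = [r[0]]
    for m in range(1, M + 1):
        # reflection coefficient k_m, the last parameter of the order-m model
        k = -(r[m] + np.dot(a, r[m - 1:0:-1])) / s2[-1]
        a = np.concatenate([a + k * a[::-1], [k]])
        s2.append(s2[-1] * (1 - k * k))
    return a, s2
```

The returned residual variances are exactly the conditional variances needed in (17) and (20).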

IV. DATA GENERATION

Data generation is strictly separated into MA and AR generation. For ARMA processes, it is essential that the AR part is used first, because the derivation of Section III used white noise as input, e.g., in (11). Recall that the covariances r(m) in (16) are the covariances belonging to the AR polynomial only. The principle for ARMA processes is given by

A(z) y_n = ε_n,   x_n = B(z) y_n   (21)

which together combine to the result of (2)

A(z) x_n = B(z) ε_n.   (22)

AR Data

The purpose is to generate N observations with the probability density function of (10). The only requirement for the first observation is that it has expectation zero and variance r(0) = σ_0². That is found with a random number generator with normal or Gaussian density and the prescribed variance. The second observation follows from (15) and (17) as a normally distributed random variable with mean -a_1^{(1)} x_1 and variance σ_1². The third observation uses (16) and (17), with mean -a_1^{(2)} x_2 - a_2^{(2)} x_1 and variance σ_2². The first p observations are generated in this way. According to (11), all further observations can be generated with the regime

x_n = -a_1 x_{n-1} - ... - a_p x_{n-p} + ε_n,   n > p.   (23)

This is a filter procedure with a Gaussian random white noise signal as input and the first p observations as initial conditions. For AR processes, the initial observations and the filter results with (23) are together the N observations.
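The exact AR generation procedure just described can be sketched as follows. This is an illustrative implementation under my own conventions (covariances supplied up to lag p, a numpy random generator), not the authors' code.

```python
import numpy as np

def generate_ar(r, n, rng=None):
    """Generate n stationary AR(p) observations exactly, given the
    covariance function r[0..p] of the process.

    The first p observations are drawn from the conditional densities
    built with the Levinson-Durbin recursion; all further observations
    follow the AR filter regime driven by white noise, as in (23).
    """
    rng = np.random.default_rng() if rng is None else rng
    p = len(r) - 1
    # Levinson-Durbin: predictors of orders 0..p and residual variances
    a, s2 = np.zeros(0), [r[0]]
    preds = [a]
    for m in range(1, p + 1):
        k = -(r[m] + np.dot(a, r[m - 1:0:-1])) / s2[-1]
        a = np.concatenate([a + k * a[::-1], [k]])
        preds.append(a)
        s2.append(s2[-1] * (1 - k * k))
    x = np.empty(n)
    # first observation: zero mean, variance r(0)
    x[0] = rng.normal(0.0, np.sqrt(s2[0]))
    # observations 2..p: conditional mean from the order-m predictor
    for m in range(1, min(p, n)):
        x[m] = -np.dot(preds[m], x[:m][::-1]) + rng.normal(0.0, np.sqrt(s2[m]))
    # remaining observations: AR(p) filter driven by white noise
    for m in range(p, n):
        x[m] = -np.dot(preds[p], x[m - p:m][::-1]) + \
               rng.normal(0.0, np.sqrt(s2[p]))
    return x
```

Because the initial observations already have the stationary distribution, no warm-up run is needed and no points have to be discarded.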

MA or ARMA(p, q) Data

For MA data, the input signal for the second equation in (21) is a Gaussian white noise ε_n. For ARMA processes, the input to the MA filter is the output y_n of the AR filter (23). The data are computed with

x_n = y_n + b_1 y_{n-1} + ... + b_q y_{n-q}.   (24)

The first q data points require a negative input index. Therefore, to generate N MA or ARMA observations with this method, the input sequence has to be longer than N. The first q points of the filter output are disregarded.
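The two-stage scheme of (21) and (24) can be sketched as below. This sketch uses a warm-up run with a discarded transient instead of the exact conditional start of Section III, so it is the approximate route; the function name and the default warm-up length are my own assumptions.

```python
import numpy as np

def generate_arma(a, b, n, var_eps=1.0, n_discard=None, rng=None):
    """Generate n ARMA(p, q) observations by the two-stage scheme:
    white noise -> AR filter 1/A(z) -> MA filter B(z).

    A longer run is generated and the first n_discard points are
    dropped so that the AR transient has died out (approximate start,
    not the exact conditional-density start of Section III).
    """
    rng = np.random.default_rng() if rng is None else rng
    a = np.asarray(a, float)
    b = np.asarray(b, float)
    if n_discard is None:
        n_discard = 100 * (len(a) + len(b) + 1)  # heuristic warm-up length
    eps = rng.normal(0.0, np.sqrt(var_eps), n + n_discard)
    y = np.empty_like(eps)
    # AR part: y[m] = eps[m] - a1 y[m-1] - ... - ap y[m-p], zeros before start
    for m in range(len(eps)):
        acc = eps[m]
        for k, ak in enumerate(a, start=1):
            if m - k >= 0:
                acc -= ak * y[m - k]
        y[m] = acc
    # MA part: x[m] = y[m] + b1 y[m-1] + ... + bq y[m-q]
    x = np.convolve(y, np.concatenate([[1.0], b]))[:len(y)]
    return x[n_discard:]
```

For a pure MA process the warm-up is not strictly needed, since the impulse response is finite; for AR and ARMA processes it only approximates stationarity, which is exactly the point made at the start of this section.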

V. FROM SPECTRUM TO ARMA MODEL

Data are always generated with the time series model (1), using white noise from a random generator as input. Therefore, the prescribed spectral density will in the end be defined by the parameters of an ARMA model. If the parameters are already given as a characterization of the desired spectral density, they can be used immediately. However, the prescribed spectrum can also be given as an exact continuous function h(ω). Without loss of generality, the sampling time is taken to be 1. The first step is to determine an approximation for the covariance function. The covariance function is the inverse integral Fourier transform of the continuous function h(ω)

r(m) = ∫_{-π}^{π} h(ω) e^{jωm} dω.   (25)

The length of r(m) may be infinite. By defining h(ω) periodic with period 2π, the integral in (25) can also be taken from 0 to 2π. As h(ω) in (25) is continuous, the usual discrete-time inverse Fourier computer transformations can only be approximations for r(m), unless specific assumptions can be made about the interpolated values between the discrete frequency values that are used in the computation. Different covariance functions can belong to the same finite sampled discrete version of h(ω). A unique finite covariance may be found by considering the process as a MA process [6], which has nonzero values for only a finite range of shifts m. Also, considering a finite part of the covariance function as a Fourier series gives a unique interpolation in a continuous function h(ω) [2, p. 578] with the FFT algorithm. However, the true covariance function of an AR or an ARMA process is infinitely wide, albeit that it will be damping out. Approximating the integral in (25) by a summation is equivalent to sampling in the frequency domain. This causes the equivalent of aliasing of the covariance function in the time domain. Therefore, some care in the covariance calculation with the inverse Fourier transform is required. Taking the mid-range value of the integrand as representative for an integration interval of width 2π/L, a correlation function can be approximated from L discrete values h(ω_k) as

r(m) ≈ (2π/L) Σ_{k=0}^{L-1} h(ω_k) e^{jω_k m} sin(πm/L)/(πm/L).   (26)

For small m/L, this summation is almost equal to the usual summation result because sin(πm/L)/(πm/L) has the limiting value 1 for small m/L. A small value of 1/L is obtained if many equidistant points are used. The derivation of (26) for r(m) is preferred above the usual inverse FFT if only a few samples of a given h(ω) are used in the inverse transformation. It is also possible that the prescribed spectral density is specified for only a small number of frequencies. The integration in (26) defines explicitly what is transformed. However, if A(z) and B(z) are given, it is always more accurate to use the standard time series formulae for the calculation of the covariance function for a finite number of shifts [2]. Integration with (26) or any other approximation of the integral can only provide exact results for the covariance function if the true process is MA.

For one-sided prescribed spectra, defined for ω between 0 and π, the first step is to sample h(ω) at intervals 2π/L. The samples are made into a symmetric spectrum by adding the samples for the frequencies between π and 2π in reversed order. Taking L as a power of 2 gives a convenient length for the FFT. After the inverse Fourier transform of the elongated sampled spectrum, the first K + 1 points, K < L/2, describe the desired covariance function. K should be chosen high enough that the values of the covariance are effectively zero for lags beyond K. If that turns out to be impossible for a prescribed spectrum, because the covariance function is too elongated, K should be chosen at least as high as the number of observations N that has to be generated. The larger L, the better the estimated covariance. This is an operational advice for the length, and it should always be validated that the correlation is negligible beyond K, or that K ≥ N. The first K lags of the covariance function are transformed to an AR(K) model with the Yule–Walker relations [6]. If the prescribed h(ω) is exact, taking L and K greater will generally give a better approximation; the improvement may be small.
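The operational recipe of this paragraph (sample the one-sided spectrum, symmetrize, inverse FFT, Yule–Walker on the first lags) might be sketched as below. All names are illustrative, and the mid-range refinement of (26) is omitted for brevity, so this is a plain inverse-FFT approximation.

```python
import numpy as np

def spectrum_to_ar(psd, L, K):
    """Approximate a one-sided prescribed spectrum by an AR(K) model.

    psd -- function of normalized frequency f in [0, 0.5]
    L   -- number of equidistant samples of the one-sided spectrum;
           2L is the FFT length
    K   -- AR order (K < L); the first K lags of the covariance
           obtained by inverse FFT feed the Yule-Walker relations
    """
    f = np.arange(L) / (2 * L)                # one-sided frequency samples
    h = psd(f)
    h_sym = np.concatenate([h, h[::-1]])      # symmetric two-sided spectrum
    r = np.real(np.fft.ifft(h_sym))[:K + 1]   # covariance at lags 0..K
    # Yule-Walker solved with the Levinson-Durbin recursion
    a, s2 = np.zeros(0), r[0]
    for m in range(1, K + 1):
        k = -(r[m] + np.dot(a, r[m - 1:0:-1])) / s2
        a = np.concatenate([a + k * a[::-1], [k]])
        s2 *= 1 - k * k
    return a, s2
```

For a flat prescribed spectrum, the covariance is a delta at lag zero and the procedure correctly returns an all-zero AR parameter vector.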


The long AR(K) model may be truncated at a lower order M, depending on the number of observations N that has to be generated. A natural choice is to allow distortions that are much smaller than the expected statistical uncertainty if the generated data are analyzed. The distortion can be quantified as the bias of the truncated model in the ME measure (4) or in the spectral distortion (5). If the bias contribution of the truncation is smaller than the variance contribution to the ME of estimating a single parameter, the higher order parameters can be omitted without the possibility that the omission can be detected from N generated observations. This is no tight boundary, but it is a sensible compromise. This gives an allowed ME of 1 for the truncated AR(M) model with respect to the true AR(∞) process for a given N. With (4), this gives

(27)

This is the lowest order M for which the AR(M) model gives

(28)

In practice, the AR(∞) process in (28) is replaced by the finite order AR(K) process, with K chosen at least so high that the difference between the AR(K) and the AR(K/2) model has a ME smaller than, say, 0.1 for the given N. This can be verified in practice, and K can be chosen higher until this condition is met. Simulations will indicate that the orders M are often much smaller than K. It is possible to use the AR(K) model as input for a reduced statistics algorithm [4] that computes AR, MA, and ARMA models of various orders. Moreover, it will select the model type and the model order that are closest to the given AR(K) process. Furthermore, the MA model with the lowest order and the ARMA model with the lowest order can be found that have a ME value less than 1 with respect to the AR(K) model, for a given value of N. In this way, the minimum number of parameters can be determined that is required to meet the accuracy demands. If the true process were a finite order MA or ARMA process, this finite order can easily be detected with the reduced statistics estimator. If this MA or ARMA model requires fewer parameters than the AR(M) model described before, it may be worthwhile to use the most parsimonious time series model to generate or to filter data. However, it should be realized that the MA and ARMA models have long transients in inverse filtering. Using lower order models will give a less accurate approximation of the prescribed spectrum.
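For AR models, the ME of (4) can be computed from the impulse response of Â(z)/A(z), since the prediction error of a model Â applied to a process with polynomial A equals σ_ε² times the sum of squares of that impulse response. The sketch below truncates the impulse response at a fixed length, which is my own simplification; the function name is illustrative.

```python
import numpy as np

def model_error(a_model, a_true, n_obs, n_terms=2000):
    """Model error ME of an AR model relative to a true AR process.

    a_model, a_true -- parameter vectors [a1..ap] of A_hat(z) and A(z),
                       the leading 1 being implied.
    Uses PE/var_eps = sum_k h_k^2, where h is the impulse response of
    A_hat(z)/A(z), and then ME = n_obs * (PE/var_eps - 1), as in (4).
    """
    num = np.concatenate([[1.0], a_model])
    den = np.concatenate([[1.0], a_true])
    # impulse response of A_hat(z)/A(z) by polynomial long division
    h = np.zeros(n_terms)
    for m in range(n_terms):
        acc = num[m] if m < len(num) else 0.0
        for k in range(1, min(m, len(den) - 1) + 1):
            acc -= den[k] * h[m - k]
        h[m] = acc
    return n_obs * (np.sum(h * h) - 1.0)
```

A truncated model can then be accepted, in the spirit of (27) and (28), as soon as its ME with respect to the long reference model drops below 1 for the intended number of observations.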

It is also possible to define a prescribed spectrum with an estimated periodogram, instead of with a continuous function h(ω). In those cases, special care is required, because estimated periodograms contain a lot of spurious details. If N observations are transformed with the FFT and the absolute values are squared, the resulting function is the raw periodogram. The inverse Fourier transform of the periodogram should be the covariance function, but it is an aliased version, which has as expectation [2]

E[r̂(m)] = (1 - |m|/N) r(m)   (29)

with |m| < N. Using tapers and windows will cause more distortions of the covariance. The use of periodograms cannot be advised. Periodograms as a spectral prescription can be of the same accuracy as some very high-order estimated AR model if the triangular bias in (29) is negligible [3]. However, a much better solution exists. If one wants to use measured data as a prototype for the spectrum, it is advised to estimate the time series model for those data with ARMAsel [8]. This automatic computer program finds the best spectral time series model for given data with robust algorithms and criteria, which select only statistically significant details.

Another possibility is that the information in h(ω) is not exact, at least not for all frequencies. For colored satellite noise, the spectral density at some discrete, not equidistant, frequencies has been given, and the continuous function is determined with interpolation [1]. The proposed method to deal with those circumstances is largely the same as dealing with an exact h(ω), but with a completely different critical value for the ME. Variation of the interpolation methods and spline solutions produces different continuous spectra, different covariances, and, hence, different time series models. The difference between those interpolation solutions can be expressed in the ME (4) for a given value of N, the number of data to be generated. If no information about the best interpolation is available, all solutions obey the prescriptions equally well. The largest difference in ME found between the different interpolation results can be used as a critical ME value, instead of the critical value ME = 1 that was derived before for a given exact true h(ω). In this way, lower order models may become accurate enough if the prescription is less precise.

The practical criterion for data generation is always: generate N observations with a time series spectrum that cannot be distinguished from the true given spectrum. It might be possible that the model is good for the generation of N observations, but the difference can perhaps be seen if the same model is used to generate 100N or more observations. Therefore, the generating model may depend on N.

VI. SIMULATIONS

To show the possibilities of the data generation, it is applied to an example where broken exponents determine the desired spectral shape. It is a prototype spectrum for turbulence data. The spectrum consists of two declining slopes after a constant part at the lowest frequencies. The first slope descends from f = 0.01, and the second, steeper slope starts at f = 0.1

(30)


Fig. 1. Turbulence prototype spectrum and approximating AR(1000) model.

TABLE I. LOWEST ALLOWED TRUNCATED AR ORDER M, WITH ME LESS THAN 1, FOR THE GENERATION OF N TURBULENCE OBSERVATIONS, AS A FUNCTION OF N, WITH K = 1000.

Values for L, the length of the inverse Fourier transform, have hardly any influence on the minimum order M. The values for M obtained with K = 500 are equal to those in Table I, except the last one, which becomes 141. This demonstrates that this turbulence covariance function is already damped out sufficiently at lag 500. The sampling distance in the frequency domain hardly influences the estimated covariance function and the AR parameters if the length of the AR model is greater than 500.

An interesting spectral density in physics is the 1/f spectrum in Fig. 2. The normalized true spectrum and an approximation are shown. Both have the same sum of the total power in the normalization. Due to the singularity at f = 0, it is difficult or even impossible to generate data that obey this spectrum. The length of the approximating covariance function is strongly influenced by the value of L. Particularly, the approximating finite value of the spectrum taken for f = 0 and the choice for L, determining the smallest nonzero sampled frequency at 1/L, have an enormous impact on the length of the correlation. On the other hand, it is impossible to recognize or to verify the 1/f shape from N observations at frequencies lower than, say, 1/N. This is visible in Fig. 2, because the approximating spectrum becomes inaccurate for frequencies less than, say, 0.0001. In approximating this true spectrum, a choice had to be made for the value at f = 0, because the true value of the spectrum becomes infinite and leads to numerical problems. Therefore, some extrapolation from the first two nonzero sampled values of h(ω) has been used. A parabola through the spectrum at those two points,

Fig. 2. 1/f prototype spectrum and approximating AR(4096) model for L = K = 4096.

TABLE II. LOWEST ALLOWED TRUNCATED AR ORDER M, WITH ME LESS THAN 1, FOR THE GENERATION OF N FROM 100 TO 100 000 OBSERVATIONS, AS A FUNCTION OF L AND K, FOR 1/f AS PRESCRIBED SPECTRAL DENSITY. N IS THE NUMBER FOR WHICH THE AR(K) AND THE AR(K/2) MODEL HAVE 0.1 AS THEIR ME DIFFERENCE.

with the additional demand that the derivative of the spectrum at f = 0 equals 0, yields the extrapolated values.

Table II gives some results for the prescribed 1/f spectrum. The value for M depends much more on L than on K, because the length of the approximating covariance function depends strongly on L. If K is chosen long enough, its actual value has almost no influence. The necessary AR orders are much higher than for the turbulence example. Despite that, the accuracy at very low frequencies is still not good. The accuracy of AR models of different orders is shown in Fig. 3. The low order models in Fig. 3 are still reasonable for the higher frequencies; AR(10), AR(100), and AR(1000) are each reasonable down to successively lower frequencies. The full length AR(4096) model is shown in Fig. 2. This demonstrates that it is possible to find a satisfactory time series model for any N, by selecting K at least as large as N. However, in cases with a singularity at f = 0, the required minimum model order is rather high. It can be advised to always compare the truncated AR spectra visually with the prescription, as in Fig. 3. The truncation value of M in Table II is a minimum; using the full length K always gives a better approximation.

(7)

Fig. 3. 1/f prototype spectrum and three AR approximations for L = K = 4096.

Fig. 4. 1/f prototype spectrum, the AR(63) approximation, and the ARMAsel spectrum selected from 1000 observations that had been generated with the AR(63) process.

The choice K ≥ N also deals with the problem that the estimated covariance will always depend on the arbitrary length L. That length determines the sampling distance in the frequency domain and, therefore, the lowest nonzero frequency that is taken into account in the transformation of the true spectrum. Taking K ≥ N gives some guarantee that the differences between the generating process and the prescribed spectrum are not noticeable from N observations.

This is demonstrated in Fig. 4, where the ARMA(5,4) spectrum is shown that was selected with the ARMAsel algorithm [8] from 1000 observations that were generated with the AR(63) model. The resulting ARMAsel spectrum is good at the higher frequencies, reasonable at intermediate frequencies, and poor at still lower frequencies. It closely follows the AR(63) spectrum that generated the observations. Taking a higher AR order for the generating process gives ARMAsel results that are closer to the 1/f spectrum for a larger part of the frequency range.

VII. CONCLUSION

Data with a prescribed spectral density can be generated in a simple way with time series models. The first step is to determine a finite-order AR model that has a spectrum close enough to the prescribed spectrum. Several tools are available to facilitate the search for a parsimonious time series model of other types. Use of higher orders always improves the accuracy, unless the spectral prescription was specified as a finite-order time series model.

By using an exact description of the probability density function of N autoregressive observations as a product of conditional probabilities, it is possible to efficiently generate autoregressive data. The generation of MA or ARMA data is solved with a simple filter operation.

REFERENCES

[1] R. Klees, P. Ditmar, and P. M. T. Broersen, “How to handle colored observation noise in large least-squares problems?,” J. Geodesy, vol. 76, pp. 629–640, 2003.

[2] M. B. Priestley, Spectral Analysis and Time Series. London, U.K.: Academic, 1981.

[3] P. M. T. Broersen, “Automatic spectral analysis with time series models,” IEEE Trans. Instrum. Meas., vol. 51, pp. 211–216, Apr. 2002.

[4] P. M. T. Broersen and S. de Waele, “Selection of order and type of time series models from reduced statistics,” in Proc. IEEE/IMTC Conf., Anchorage, AK, May 2002, pp. 1309–1314.

[5] P. M. T. Broersen, “The quality of models for ARMA processes,” IEEE Trans. Signal Processing, vol. 46, pp. 1749–1752, June 1998.

[6] S. M. Kay and S. L. Marple, “Spectrum analysis—A modern perspective,” Proc. IEEE, vol. 69, pp. 1380–1419, Nov. 1981.

[7] P. J. Brockwell and R. A. Davis, Time Series: Theory and Methods. New York: Springer-Verlag, 1987.

[8] P. M. T. Broersen. ARMASA Toolbox. [Online]. Available: http://www.tn.tudelft.nl/mmr/downloads.

Piet M. T. Broersen was born in Zijdewind, The Netherlands, in 1944. He received the M.Sc. degree in applied physics and the Ph.D. degree from the Delft University of Technology (DUT), Delft, The Netherlands, in 1968 and 1976, respectively.

He is currently with the Department of Applied Physics, DUT. His main research interest is in automatic identification on statistical grounds. He found a solution for the selection of order and type of time series models for stochastic processes and the application to spectral analysis, model building, and feature extraction.

Stijn de Waele was born in Eindhoven, The Netherlands, in 1973. He received the M.Sc. degree in applied physics and the Ph.D. degree, with a thesis entitled “Automatic model inference from finite time observations of stationary stochastic processes,” from the Delft University of Technology, Delft, The Netherlands, in 1998 and 2003, respectively.
