
Delft University of Technology

A variance reduction technique for identification in dynamic networks

Gunes, Bilal; Dankers, Arne; Van den Hof, Paul M. J.

DOI: 10.3182/20140824-6-ZA-1003.01495
Publication date: 2014
Document version: Final published version
Published in: IFAC Proceedings Volumes

Citation (APA)
Gunes, B., Dankers, A., & Van den Hof, P. M. J. (2014). A variance reduction technique for identification in dynamic networks. IFAC Proceedings Volumes, 47(3), 2842-2847. https://doi.org/10.3182/20140824-6-ZA-1003.01495



A variance reduction technique for identification in dynamic networks

Bilal Gunes*, Arne Dankers*, Paul M.J. Van den Hof**

* Delft Center for Systems and Control, Delft University of Technology, The Netherlands (e-mail: {bgunes@student., a.g.dankers@}tudelft.nl).
** Department of Electrical Engineering, Eindhoven University of Technology, The Netherlands (e-mail: p.m.j.vandenhof@tue.nl).

Abstract: With advancing technology, systems are becoming increasingly interconnected and form more complex networks. Additionally, more measurements are available from systems due to cheaper sensors. Hence there is a need for identification methods specifically designed for networks. For dynamic networks with known interconnection structure, several methods have been proposed for obtaining consistent estimates. We suppose that the internal variables in the network are measured with noise, but that there are external reference signals present in the network that are known exactly. A method that is able to deal with this situation is the two-stage method, which solves several open-loop identification problems sequentially. In this paper it is shown that solving these problems simultaneously leads to estimates with lower variance.

Keywords: System identification, dynamic networks, linear systems, variance

1. INTRODUCTION

A network is a set of subsystems (or modules) embedded according to an interconnection structure (Van den Hof et al. (2013)). Dynamic networks are becoming increasingly complex in engineering. In addition, the ability to take measurements using sensors is also increasing. Thus the identification-in-networks problem is becoming increasingly important. It is advantageous to address these identification problems as explicit network identification problems, because networks exhibit phenomena which do not appear in classic open- and closed-loop systems. The quality of an estimate can be assessed by determining whether it is consistent and what its variance is. Consistent models with low variance are in demand. The variance of the parameters of the estimate defines the confidence regions, and the prediction error method provides tools to assess them.

Consistent estimates of a module embedded in a network can be obtained using several methods presented in Van den Hof et al. (2013). However, when the internal variables are measured with sensor noise, only the two-stage method (Van den Hof et al. (2013)) still results in consistent estimates. To use this method, there must be external variables present which are known exactly; examples of such signals are reference signals in a control loop. In the current literature, only limited analysis is available on the variance of network prediction error method estimates. What are the variance expressions? And can the variance of the two-stage method be reduced in a smart way?

⋆ The work of Arne Dankers is supported in part by the Natural Sciences and Engineering Research Council (NSERC) of Canada.

In Wahlberg et al. (2009) a method is proposed to reduce the variance of estimates in a cascade. It is shown that their alternative objective leads to a reduction of the variance of the estimate of the upstream transfer function in the cascade. The question arises how this variance reduction technique can be exploited for identification of a target module embedded in a dynamic network. In this paper it is shown that the results of Wahlberg et al. (2009) for cascade systems can be extended to directed acyclic graphs. Additionally, it is shown that not only is the upstream module estimated with lower variance, but so is the downstream module. Furthermore, it is shown that the two-stage method effectively transforms a network into a directed acyclic graph. Consequently, we combine these three results to obtain estimates of modules embedded in a network with lower variance than the two-stage method. The mechanism by which the new method achieves lower variance is the simultaneous minimization of the set of prediction errors that are sequentially minimized in the two-stage method.

The background material is presented in Section 2. The proposed method and its consistency properties are presented in Section 3. Its variance is treated in Section 4 and compared with the two-stage method in Sections 5 and 6. Simulation results are presented in Section 7.

2. BACKGROUND

2.1 Dynamic networks

A dynamic network is built up of L elements, related to L measured scalar internal variables w_j, j = 1, ..., L. Every internal variable can be written as

  w_j(t) = \sum_{k \in N_j} G_{jk}(q) w_k(t) + r_j(t)    (1)

• with G_{jk} a proper rational transfer function;
• with N_j the set of indices of internal variables with a direct causal connection to w_j; k ∈ N_j iff G_{jk} ≠ 0;
• with r_j an external variable that can possibly be manipulated by the user;
• with q^{-1} the delay operator (i.e. q^{-1} u(t) = u(t − 1)).

The internal variables can be expressed as:

  [w_1; w_2; ...; w_L] = [0, G_12, ..., G_1L; G_21, 0, ..., G_2L; ...; G_L1, G_L2, ..., 0] [w_1; w_2; ...; w_L] + [r_1; r_2; ...; r_L]    (2a)
  w(t) = G(q) w(t) + r(t)    (2b)
  w(t) = (I − G(q))^{-1} r(t)    (2c)

where it is assumed that (I − G)^{-1} exists. Define S = (I − G)^{-1}. Some r_i may not be present; define R as the set of indices of the r_i that are present. Any measurement can be expressed as

  \tilde{w}_k = w_k + s_k    (3)

where s_k is a sensor error. It is a stationary stochastic process with power spectral density Φ_{s_k}(ω) = λ_k (i.e. white noise). The proposed dynamic network has sensor noise and no process noise. Every real sensor has some noise, and this noise is presumed to originate from the internal workings of the individual sensors, so every sensor has a different error. Because networks involve a large number of measurements, it is important to deal with this noise explicitly. More complex noise frameworks will be investigated in future work. The following assumptions are made on dynamic networks:

Assumption 2.1.
• The network is well-posed in the sense that all principal minors of (I − G(∞))^{-1} are non-zero.
• (I − G)^{-1} is stable.
• The measurement noise sources are independent white noise sources: Φ_s = Λ = diag(λ_1, ..., λ_L).
• The external excitation signals are mutually uncorrelated, i.e. R_{r_1 r_2}(τ) = 0 for all τ.

Define w_N = [w_{k_1} ... w_{k_n}]^T and G_{jN} = [G_{jk_1} ... G_{jk_n}]^T, where {k_1, ..., k_n} = N_j. Any measurement \tilde{w} can then be written in either global or local form, respectively, as

  \tilde{w}_k(t) = \sum_{l \in R} S_{kl}(q) r_l(t) + s_k(t)    (4a)
  \tilde{w}_j(t) = \sum_{k \in N_j} G_{jk}(q) w_k(t) + r_j(t) + s_j(t)    (4b)

where S_{kl} is the (k, l)-th element of (I − G)^{-1}.
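As an illustration of the signal definitions above, the following minimal sketch simulates a small acyclic network with sensor noise on every internal variable. The FIR modules, noise powers and signal names are illustrative assumptions, not the example network used later in the paper.

```python
# Minimal simulation sketch (not from the paper): a three-node acyclic
# network w = (I - G)^{-1} r with sensor noise w~_k = w_k + s_k as in (3).
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
N = 250                                    # number of samples

# Assumed modules: only G21 and G32 are nonzero, so the network is the
# chain r1 -> w1 -> w2 -> w3 and (I - G)^{-1} trivially exists.
g21 = np.array([0.0, 0.5, 0.3])            # impulse response of G21 (with delay)
g32 = np.array([0.0, 0.4, -0.2])           # impulse response of G32

r1 = rng.standard_normal(N)                # known external excitation
w1 = r1                                    # w1 has no incoming modules
w2 = lfilter(g21, [1.0], w1)               # w2 = G21(q) w1
w3 = lfilter(g32, [1.0], w2)               # w3 = G32(q) w2

# independent white sensor noise on every measured internal variable
lam = {1: 0.03, 2: 0.03, 3: 0.03}
w1_t = w1 + np.sqrt(lam[1]) * rng.standard_normal(N)
w2_t = w2 + np.sqrt(lam[2]) * rng.standard_normal(N)
w3_t = w3 + np.sqrt(lam[3]) * rng.standard_normal(N)
```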

2.2 The Prediction Error method

The prediction error method (Ljung (1999)) predicts the output with a one-step-ahead predictor. The prediction error is minimized to obtain a model. The one-step-ahead predictor is

  \hat{w}_k(t|t−1) = \sum_{l \in R} S_{kl}(q, θ) r_l(t)    (5)

and the corresponding prediction error is

  ε_k(t) = \tilde{w}_k(t) − \hat{w}_k(t|t−1; θ)    (6)

The unknown parameters are then estimated through a prediction error criterion based on a cost function V_N:

  \hat{θ}_N = arg min_θ V_N(θ),    (7a)
  V_N(θ) = \sum_{t=0}^{N−1} ε^2(t, θ)    (7b)

where V_N(θ) is the sum of squared prediction errors.

Definition 2.1. An estimate G_{jk}(q, \hat{θ}_N) is consistent if G_{jk}(q, \hat{θ}_N) → G_{jk}(q) with probability 1 as N → ∞.

The variance of the estimate of the parameter vector θ in (7) is characterized by the following proposition:

Proposition 2.1. Suppose that Assumption 2.1 holds, and assume that the data set is informative enough. Then the covariance matrix of θ, denoted P_θ, is (Ljung (1999)):

  P_θ = M^{-1},    (9a)
  M = \bar{E} ψ(t, θ_0) Λ^{-1} ψ(t, θ_0)^T,    (9b)
  ψ(t, θ_0) = ∂ε(t, θ)/∂θ |_{θ=θ_0},    (9c)

where \bar{E} denotes the mean over time and ensemble (Ljung (1999)), ψ(t, θ_0) is the gradient of the prediction error evaluated at θ_0, and Λ is a diagonal matrix with the noise powers. M represents the information matrix.
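For a model that is linear in the parameters (for instance an FIR parametrization), the gradient ψ(t, θ_0) is simply the regressor, and the expressions of Proposition 2.1 can be evaluated from data. A minimal numpy sketch under that assumption follows; the helper names are illustrative and not from the paper.

```python
# Sketch of Proposition 2.1 for a model that is linear in the parameters:
# psi(t, theta0) equals the regressor, so M can be estimated by a sample
# average. Helper names are illustrative assumptions.
import numpy as np

def fir_regressor(u, nb):
    """Rows are [u(t), u(t-1), ..., u(t-nb+1)] (zero initial conditions)."""
    N = len(u)
    Phi = np.zeros((N, nb))
    for i in range(nb):
        Phi[i:, i] = u[:N - i]
    return Phi

def pem_covariance(Phi, lam_k):
    """P_theta = M^{-1} with M ~ (1/N) sum_t psi(t) lam_k^{-1} psi(t)^T;
    the covariance of the estimate over N samples is then P_theta / N."""
    N = Phi.shape[0]
    M = (Phi.T @ Phi) / (N * lam_k)
    return np.linalg.inv(M)
```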

2.3 The two-stage method

The two-stage method (Van den Hof et al. (2013)) attempts to obtain a consistent estimate of a target module in a dynamic network. Its (second-stage) predictor inputs are (asymptotically) noise-free estimates of internal variables. The two-stage method performs consecutive minimization of prediction errors.

In the first stage, the goal is to reconstruct the internal variables w_k in (4). For this purpose, the S_{kl}, k ∈ N_j, l ∈ R, are estimated by minimizing the quadratic cost function (7) of the prediction error

  ε_k(t, α) = \tilde{w}_k(t) − \sum_{l \in R} S_{kl}(q, α) r_l(t)    (10)

where α is a parameter vector. Let S_{kl}(q, \hat{α}) denote the estimate of S_{kl}. The estimate of w_k is then

  \hat{w}_k(\hat{α}) = \sum_{l \in R} S_{kl}(q, \hat{α}) r_l(t)    (11)

In the second stage, the estimates of the noise-free internal variables are used to identify the target module G_{ji} in an open-loop problem. Estimates are obtained by minimizing the quadratic cost function (7) of the prediction error


  ε_j(t, \hat{α}, β) = \tilde{w}_j(t) − r_j(t) − \sum_{k \in N_j} G_{jk}(q, β) \hat{w}_k(\hat{α})    (12)

The two-stage method is defined as follows.

Algorithm 2.1. The two-stage method
(1) Obtain estimates \hat{w}_k of w_k for each k ∈ N_j using (10) and (7).
(2) Using \hat{w}_k(\hat{α}), obtain estimates of the target module G_{ji} using (12) and (7).
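A minimal sketch of Algorithm 2.1 on the illustrative chain network introduced earlier, with FIR parametrizations of S_{21} and the target module G_{32}, so that each stage reduces to ordinary least squares. The model orders and variable names are assumptions, and the sketch reuses fir_regressor and the simulated signals from the previous sketches.

```python
# Sketch of Algorithm 2.1: two sequential open-loop least-squares problems.
import numpy as np

nb = 8  # assumed FIR order for both stages

# Stage 1: estimate S21 from r1 to the measured w2 by minimizing (10)
Phi1 = fir_regressor(r1, nb)
alpha_hat, *_ = np.linalg.lstsq(Phi1, w2_t, rcond=None)
w2_hat = Phi1 @ alpha_hat                 # reconstructed noise-free w2, cf. (11)

# Stage 2: estimate the target module G32 from w2_hat to the measured w3, cf. (12)
Phi2 = fir_regressor(w2_hat, nb)
beta_hat, *_ = np.linalg.lstsq(Phi2, w3_t, rcond=None)
```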

Proposition 2.2. Consider a dynamic network as defined in Section 2.1. Algorithm 2.1 provides consistent estimates of G_{ji} if the following conditions hold:

• The power spectral densities of [\hat{w}_{k_1} ... \hat{w}_{k_n}], k_* ∈ N_j, and [r_{l_1} ... r_{l_n}], l_* ∈ R, are positive definite for ω ∈ [−π, π].
• The parametrization is chosen flexible enough such that there exists a parameter vector θ_0 such that G_{jk}(β_0) = G_{jk} and S_{kl}(α_0) = S_{kl}.

Notice that the two-stage method tackles the network identification problem as two sequential open-loop problems. The parametrization of the two-stage method can be represented as a directed acyclic graph. This is shown in Fig. 1 and the following equations:

  w_N(t) = S_{NR}(q) r(t)    (13a)
  w_j(t) − r_j(t) = G_{jN}(q) w_N(t)    (13b)

where w_N = [w_{k_1} ... w_{k_n}]^T, k_* ∈ N_j, is a vector.

Fig. 1. The parametrization of the two-stage method can be represented as a directed acyclic graph (13).

2.4 Variance reduction technique for cascade systems

In Wahlberg et al. (2009), a similar framework is investigated for a cascade system. Using our notation:

  \tilde{w}_2(t) = S_{21}(q) r_1(t) + s_2(t)    (14a)
  \tilde{w}_3(t) = G_{32}(q) w_2(t) + s_3(t)    (14b)

It is possible to consistently estimate S_{21} using only \tilde{w}_2, but Wahlberg et al. (2009) show that the variance of the estimate of S_{21} can be reduced by minimizing

  V_N(θ) = (1/N) \sum_{t=0}^{N−1} [ ε_2(t, θ)^2 / λ_2 + ε_3(t, θ)^2 / λ_3 ],  where:    (15a)
  ε_2(t, θ) = \tilde{w}_2(t) − S_{21}(q, α) r_1(t)    (15b)
  ε_3(t, θ) = \tilde{w}_3(t) − G_{32}(q, β) S_{21}(q, α) r_1(t)    (15c)

Notice how the modified cost function utilizes an extra measurement: the information about S_{kl} contained in \tilde{w}_3 is exploited. In our paper, no extra measurements are used, and the focus is on the target module G_{ji} embedded in a dynamic network rather than on the direct open-loop identification of S. The question is how this variance reduction technique can be adapted for identification of the target module G embedded in a dynamic network.

3. SIMULTANEOUS MINIMIZATION OF PREDICTION ERRORS

In this section it is shown that the modified cost function presented in Section 2.4 can be applied to dynamic networks. We use it to link the prediction errors of the two-stage method together: instead of sequentially minimizing the prediction errors of the two-stage method, they can be minimized simultaneously. To apply this reasoning, the results of Wahlberg et al. (2009) must be extended in two ways. It will be shown that the downstream module is also estimated with lower variance, and it will be shown that the results of Wahlberg et al. (2009) hold for directed acyclic graphs as well. It has already been shown that the two-stage method parametrization transforms a dynamic network into a directed acyclic graph expression. Consequently, we combine these results to obtain estimates of modules embedded in a dynamic network (including loops) with lower variance than the two-stage method. The method uses the same parametrization as the two-stage method, but a cost function similar to that of Wahlberg et al. (2009). The method can be presented as follows.

Algorithm 3.1.

(1) Construct the prediction errors:

  ε_N(t, α) = [ \tilde{w}_N(t) − \sum_{p=1}^{n_r} S_{Np}(q, α_p) r_p(t) ]^T    (16)
  ε_j(t, θ) = \tilde{w}_j(t) − r_j(t) − G_{jN}(q, β) \sum_{p=1}^{n_r} S_{Np}(q, α_p) r_p(t)

where α is partitioned into [α_1 ... α_{n_r}]^T, such that S_{Np}(q, α_p) is parametrized by α_p. Note that these are the same prediction errors as (10) and (12) for the two-stage method, and that both ε_N(t, α) and ε_j(t, α, β) are functions of S_{Np}(q, α_p).

(2) Obtain estimates of G_{jk}(q) by minimizing

  V_N(θ) = (1/N) \sum_{t=1}^{N} [ ε_j(t, α, β)^2 / λ_j + \sum_{k \in N_j} ε_k(t, α)^2 / λ_k ]    (17)
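The following sketch constructs the stacked prediction errors of Algorithm 3.1 for the same illustrative cascade example: a single parameter vector θ = [α; β] enters both ε_k and ε_j, each weighted by its (assumed known) noise power, so that the squared norm of the stacked residual equals N·V_N(θ) in (17). The function name and argument names are assumptions, and fir_regressor is reused from the earlier sketch.

```python
# Sketch of the stacked residuals of Algorithm 3.1 (cascade example).
import numpy as np

def joint_residuals(theta, r1, w2_meas, w3_meas, nb, lam2, lam3):
    alpha, beta = theta[:nb], theta[nb:]
    Phi_r = fir_regressor(r1, nb)
    w2_model = Phi_r @ alpha                  # S21(q, alpha) r1, as in (16)
    eps_k = (w2_meas - w2_model) / np.sqrt(lam2)
    Phi_w = fir_regressor(w2_model, nb)       # G32(q, beta) acting on the modelled w2
    eps_j = (w3_meas - Phi_w @ beta) / np.sqrt(lam3)
    return np.concatenate([eps_k, eps_j])     # squared norm = N * V_N(theta) in (17)
```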

The simultaneous cost function and the common parametrization in S_{kl} link the prediction errors of the two stages, such that \tilde{w}_j(t) is utilized in the estimation of S_{kl}. This results in a variance reduction of the estimate of S_{kl}, by extending the reasoning of Wahlberg et al. (2009) to multiple inputs. In this paper, it is shown that the variance of the estimate of the target module G_{jk} is reduced as well.

Proposition 3.1. Under the conditions of Proposition 2.2, Algorithm 3.1 results in consistent estimates of G^0_{jk} and S^0_{kl}. The proof is presented in Gunes (2013).

Note that minimizing (17) is a non-convex optimization problem. However, the optimizer can always be initialized with the initial estimates and model orders obtained by first performing the two-stage method on the dataset, because the consistency conditions (Proposition 3.1) match.
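A usage sketch of this initialization strategy: minimize the non-convex criterion (17) with a numerical least-squares solver, warm-started at the two-stage estimates. It reuses joint_residuals, alpha_hat and beta_hat from the earlier sketches; the noise-power values are the assumed ones from the illustrative simulation.

```python
# Usage sketch: warm-start the joint criterion (17) at the two-stage estimates.
import numpy as np
from scipy.optimize import least_squares

theta0 = np.concatenate([alpha_hat, beta_hat])
sol = least_squares(joint_residuals, theta0,
                    args=(r1, w2_t, w3_t, nb, 0.03, 0.03))
alpha_si, beta_si = sol.x[:nb], sol.x[nb:]
```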


4. VARIANCE EXPRESSIONS FOR SIMULTANEOUS MINIMIZATION OF PREDICTION ERRORS

In this section, the variance expressions for simultaneous minimization of prediction errors are presented. Consider a dynamic network as defined in Section 2.1 that satisfies Assumption 2.1 and the conditions of Proposition 3.1. In order to derive the covariance matrix of θ, an expression for the prediction error gradient is required. The system equations have been presented in (4), and the prediction errors in (16). Define ε = [ε_N ε_j] as a vector of the prediction errors. The prediction error gradient is

  ψ(t, α, β) = − [ ∂ε_N(t, α)/∂α   ∂ε_j(t, α, β)/∂α ;  0   ∂ε_j(t, α, β)/∂β ]    (18)

Define α_p as the parameter vector associated with the column vector S_{Np} and with r_p. Define S'_{Np} = ∂S_{Np}(q, α_p)/∂α_p |_{θ=θ_0}. Notice that G_{jN} is a row vector. For ease of notation, define G'_{jN} = [∂G^T_{jN}(q, β)/∂β]^T |_{θ=θ_0}. The prediction error gradient blocks of (18) can then be written as

  −∂ε_N(t, α)/∂α_p = [S'_{Np}(q) r_p(t)]^T    (19a)
  −∂ε_j(t, α, β)/∂α_p = [G_{jN}(q) S'_{Np}(q) r_p(t)]^T    (19b)
  −∂ε_j(t, α, β)/∂β = G'_{jN}(q) \sum_{p=1}^{n_r} S_{Np}(q) r_p(t)    (19c)

Then M, as defined in (9), can be partitioned according to α and β as

  M = \bar{E}[ψ(t, θ_0) Λ^{-1} ψ(t, θ_0)^H]    (20a)
    = [ A + F   C^H ;  C   D ],  where:    (20b)

  A_{pq} = \bar{E}[(S'_{Np} r_p)^T Λ_N^{-1} S'_{Nq} r_q]
  F_{pq} = \bar{E}[(G_{jN} S'_{Np} r_p)^T λ_j^{-1} G_{jN} S'_{Nq} r_q]
  C = [C_1 ... C_{n_r}],  where, for p = 1, ..., n_r:
  C_p = \bar{E}[G'_{jN}(q) S_{Np}(q) r_p(t) λ_j^{-1} G_{jN}(q) S'_{Np}(q) r_p(t)]
  D = \sum_{p=1}^{n_r} \bar{E}[G'_{jN}(q) S_{Np}(q) r_p(t) λ_j^{-1} r_p(t) S^T_{Np}(q) G'^T_{jN}(q)]

Note that simplifications have been performed using the non-correlation between the external excitation signals; i.e., the off-diagonal blocks of A and F are zero and D is symmetric. Recall P = M^{-1} (9). Using the Schur complement, the following proposition can be set up.

Proposition 4.1. Consider a dynamic network as defined in Section 2.1 and suppose that the conditions of Proposition 3.1 are met. The variance expression of θ = [α β] using Algorithm 3.1 is

  P^{si} = [ P^{si}_α    −P^{si}_α C^T D^{-1} ;  −D^{-1} C P^{si}_α    D^{-1} + D^{-1} C P^{si}_α C^T D^{-1} ]
  P^{si}_α = [A + F − C^T D^{-1} C]^{-1}

where the superscript si indicates the use of Algorithm 3.1, the top-left block is the covariance matrix of α, and the bottom-right block is the covariance matrix of β, denoted P^{si}_β. The other variables are defined in (20). Note that to evaluate these expressions in practice, the noise powers λ_j and λ_k, k ∈ N_j, need to be known.

5. VARIANCE EXPRESSIONS FOR THE TWO-STAGE METHOD

In this section, the variance expressions for the two-stage method in the current framework are presented. Unlike in Forssell and Ljung (1999) and Gevers et al. (2001), the variance expressions are independent of individual realizations of the first-stage estimate, because only sensor noise is present (and no process noise).

Proposition 5.1. Consider a dynamic network as defined in Section 2.1 and suppose that the conditions of Proposition 2.2 are met. The variance expressions for α and β obtained using the two-stage method presented in Algorithm 2.1 are

  P^{2S}_α = A^{-1}
  P^{2S}_β = D^{-1} + (1/λ_j^2) D^{-1} Q_2 D^{-1}

where A and D are defined in (20), and where Q_2 is

  Q_2 = (1/2π) \int_{−π}^{π} G'_{jN} [ \sum_{p=1}^{n_r} S_{Np} \bar{Φ}_{d^+} Φ_{r_p} S^H_{Np} ] G'^H_{jN} dω
  \bar{Φ}_{d^+}(α) = G_{jN} [ \sum_{q=1}^{n_r} S'_{Nq} P^{2S}_{α_q} S'^H_{Nq} Φ_{r_q} ] G^H_{jN}

The proof is presented in Appendix B. Again, to evaluate these expressions in practice, the noise powers need to be known.

6. VARIANCE COMPARISON

The variance expressions for the two-stage method (Algorithm 2.1) and for the simultaneous minimization of prediction errors (Algorithm 3.1) are summarized in Table 1. The goal is to achieve variance reduction of the target module estimate. Does the simultaneous minimization of prediction errors indeed result in target module estimates with lower variance than the two-stage method does?

Define Z = F − C^T D^{-1} C. It is proven in Appendix A that Z ⪰ 0. Hence A^{-1} ⪰ [A + Z]^{-1}, and the simultaneous minimization of prediction errors results in a lower, and therefore better, P_α. Z also appears in P_β, hinting at an improvement of the variance of the target module estimate. In the following proposition it is shown that P_β is also smaller.

Proposition 6.1. Consider the variance results summarized in Table 1. The variances of the estimates (including that of the target module) obtained by simultaneous minimization of prediction errors are equal to or smaller than those of the two-stage method. The proof is presented in Appendix C.

Table 1. Variance results summary (Propositions 5.1 and 4.1)

       Two-stage method                           Simultaneous minimization
P_α    A^{-1}                                     [A + Z]^{-1}
P_β    D^{-1} + (1/λ_j^2) D^{-1} Q_2 D^{-1}       D^{-1} + D^{-1} C [A + Z]^{-1} C^T D^{-1}
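A quick numerical sanity check of the P_α part of Proposition 6.1: for any positive definite A and positive semi-definite Z, the difference A^{-1} − (A + Z)^{-1} should be positive semi-definite. The matrices below are random and purely illustrative.

```python
# Sanity check: A^{-1} - (A + Z)^{-1} is PSD for A > 0 and Z >= 0.
import numpy as np

rng = np.random.default_rng(1)
n = 4
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)                 # positive definite A
H = rng.standard_normal((n, 2))
Z = H @ H.T                                 # positive semi-definite Z (rank 2)
diff = np.linalg.inv(A) - np.linalg.inv(A + Z)
print(np.linalg.eigvalsh(diff).min() >= -1e-10)   # expected: True
```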

7. SIMULATION RESULTS

In this section, Monte Carlo simulation results are presented to illustrate the results from the previous sections. The results are based on 200 Monte Carlo simulations of the dynamic network presented in Fig. 2. The noise is white with powers 0.03, 0.0001 and 0.03. The external excitation signals are white and of unit power. The data size is 250. The results are presented in Fig. 3.
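A Monte Carlo sketch in the spirit of this experiment compares the empirical spread of the target-module estimates of both methods over repeated noise realizations. Here simulate_data, two_stage and simultaneous are hypothetical wrappers around the earlier sketches, not functions defined in the paper, and the network is the illustrative chain rather than the one of Fig. 2.

```python
# Monte Carlo comparison sketch (hypothetical wrappers around earlier sketches).
import numpy as np

beta_2s, beta_si = [], []
for run in range(200):
    r1_run, w2_run, w3_run = simulate_data(seed=run)          # new realization
    a_hat, b_hat = two_stage(r1_run, w2_run, w3_run)          # Algorithm 2.1
    _, b_si = simultaneous(r1_run, w2_run, w3_run,            # Algorithm 3.1,
                           x0=np.concatenate([a_hat, b_hat])) # warm-started
    beta_2s.append(b_hat)
    beta_si.append(b_si)

print("two-stage std per coefficient:   ", np.std(beta_2s, axis=0))
print("simultaneous std per coefficient:", np.std(beta_si, axis=0))
```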

Fig. 2. Example dynamic network with target module G21.

It appears that both methods provide consistent estimates, but that simultaneous minimization of prediction errors results in estimates with lower variance.

Fig. 3. Magnitude plots of the estimates of G21. The thick black line represents the true system.

8. CONCLUSION AND FUTURE WORK

In this paper, a novel method for network prediction error identification has been presented. The proposed method, simultaneous minimization of prediction errors, combines the two-stage method (Van den Hof et al. (2013)) with the variance reduction technique of Wahlberg et al. (2009). It has the same consistency properties as the two-stage method, but considerably lower variance in the presence of measurement noise. The mechanism by which the new method achieves lower variance is the simultaneous minimization of the set of prediction errors that are sequentially minimized in the two-stage method. In future work, the proposed method will be extended to deal with more general cases of available measurements, as has been done in the literature for the two-stage method in Dankers et al. (2013). Other future work includes the extension to cases with both measurement and process noise.

REFERENCES

Dankers, A.G., Van den Hof, P.M.J., Heuberger, P.S.C., and Bombois, X. (2013). Identification of dynamic models in complex networks with prediction error methods - predictor input selection. Submitted to IEEE Transactions on Automatic Control.

Forssell, U. and Ljung, L. (1999). Closed-loop identification revisited. Automatica, 35(7), 1215-1241.

Gevers, M., Ljung, L., and Van den Hof, P.M.J. (2001). Asymptotic variance expressions for closed-loop identification. Automatica, 37(5), 781-786.

Gunes, B. (2013). A novel network prediction error identification method. Master's thesis, Delft University of Technology.

Ljung, L. (1999). System Identification: Theory for the User (2nd ed.). Prentice Hall PTR, Upper Saddle River, NJ, USA.

Van den Hof, P.M.J., Dankers, A.G., Heuberger, P.S.C., and Bombois, X. (2013). Identification of dynamic models in complex networks with prediction error methods: basic methods for consistent module estimates. Automatica, 49(10), 2994-3006.

Wahlberg, B., Hjalmarsson, H., and Mårtensson, J. (2009). Variance results for identification of cascade systems. Automatica, 45(6), 1443-1448.

Willems, J. (2008). Modeling interconnected systems. In Proceedings of the 3rd International Symposium on Communications, Control and Signal Processing (ISCCSP 2008), 421-424.

Appendix A. PROOF THAT Z ⪰ 0

Recall Z = F − C^T D^{-1} C. Z ⪰ 0 is equivalent to

  [ F   C^T ;  C   D ] = [ \bar{E}[y_1 y_1^T]   \bar{E}[y_1 y_2^T] ;  \bar{E}[y_2 y_1^T]   \bar{E}[y_2 y_2^T] ] ⪰ 0    (A.1a)
  y_1(t) = [G_{jN} S'_{N1} r_1 ... G_{jN} S'_{Nn_r} r_{n_r}]^T λ_j^{-0.5}    (A.1b)
  y_2(t) = G'_{jN} \sum_{p=1}^{n_r} S_{Np}(q) r_p(t) λ_j^{-0.5}    (A.1c)

which is the power spectral density of [y_1(t) y_2(t)] and hence by definition positive semi-definite.

Appendix B. PROOF OF PROPOSITION 5.1

Consider the two-stage method presented in Algorithm 2.1. In the first stage, the parameter vector α is estimated using the prediction error (10). Using Proposition 2.1, the covariance matrix of α can be derived to be A^{-1} (see (20)).

In the second stage, G_{jN}(q) is estimated as an open-loop problem. Because its input is not available, the estimated input \hat{w}_N (based on \hat{α}) is used instead. Rewriting the system equations gives

  \tilde{w}_j(t) = G_{jN}(q) \hat{w}_N(t) + r_j(t) + d(t)    (B.1a)
  \hat{w}_N(t) = \sum_{p=1}^{n_r} S_{Np}(q, \hat{α}_p) r_p(t)    (B.1b)
  d(t) = n_j(t) + d^+(t)    (B.1c)
  d^+(t) = G_{jN}(q) \sum_{p=1}^{n_r} (S_{Np}(q) − S_{Np}(q, \hat{α}_p)) r_p(t)    (B.1d)

where d is the predictor noise term. The prediction error is (12). The covariance matrix is (Forssell and Ljung (1999)):

  P_β = R^{-1} Q_1 R^{-1} + R^{-1} Q_2 R^{-1}    (B.2a)
  R = (1/2π) \int_{−π}^{π} G'_{jN} Φ_{\hat{w}_N} G'^H_{jN} dω    (B.2b)
  Q = (1/2π) \int_{−π}^{π} Φ_d G'_{jN} Φ_{\hat{w}_N} G'^H_{jN} dω    (B.2c)

Notice that n_j(t) and d^+(t) are uncorrelated because, in the framework we consider, n_N is uncorrelated with n_j: only the noise source n_N affects S_{Np}(q, \hat{α}_p) and d^+. As a result, Φ_d = Φ_{n_j} + Φ_{d^+}. Use this to split Q into Q_1 + Q_2:

  Q_1 = (1/2π) \int_{−π}^{π} Φ_{n_j} G'_{jN} Φ_{\hat{w}_N} G'^H_{jN} dω    (B.3a)
  Q_2 = (1/2π) \int_{−π}^{π} Φ_{d^+} G'_{jN} Φ_{\hat{w}_N} G'^H_{jN} dω    (B.3b)

First, obtain an expression for Φ_{d^+}. From (B.1), Φ_{d^+} can be expressed as

  Φ_{d^+}(α) = G_{jN} ΔS_{NR}(α) Φ_r ΔS^H_{NR}(α) G^H_{jN}    (B.4)

where ΔS_{NR}(q, α) = S^0_{NR}(q) − S_{NR}(q, α). The external excitation signals are uncorrelated, such that

  Φ_{d^+}(α) = G_{jN} [ \sum_{p=1}^{n_r} ΔS_{Np}(α) ΔS^H_{Np}(α) Φ_{r_p}(ω) ] G^H_{jN}    (B.5)

where Φ_{r_p}(ω) is a scalar. For every element of this sum, a first-order Taylor approximation can be performed:

  S_{Np}(e^{jω}, α_p) ≈ S_{Np}(e^{jω}, α^0_p) + S'_{Np}(e^{jω}, α^0_p)(α_p − α^0_p)
  ΔS_{Np}(e^{jω}, α_p) ≈ −S'_{Np}(e^{jω}, α^0_p)(α_p − α^0_p)

The approximation holds for α_p close to α^0_p. From (B.4), using this approximation, it follows that

  ΔS_{Np}(e^{jω}, α_p) ΔS^H_{Np}(e^{jω}, α_p) ≈ S'_{Np}(e^{jω}) (α_p − α^0_p)(α_p − α^0_p)^T S'^H_{Np}(e^{jω})    (B.6)

Φ_{d^+}(e^{jω}, α) depends on the realization of α. By averaging over the ensemble of α, a useful measure \bar{Φ}_{d^+}(e^{jω}) can be obtained. Notice that the covariance of α_p is defined as

  P^{2S}_{α_p} = E[(α_p − α^0_p)(α_p − α^0_p)^T]    (B.8)

Substituting this expression into (B.6) leads to

  \bar{Φ}_{d^+}(α) = G_{jN} [ \sum_{p=1}^{n_r} S'_{Np} P^{2S}_{α_p} S'^H_{Np} Φ_{r_p}(ω) ] G^H_{jN}    (B.9)

In similar fashion, Φ_{\hat{w}_N} can be replaced by Φ_{w_N} using (13). Also notice that R^{-1} Q_1 R^{-1} = λ_j R^{-1} = D^{-1}, because Φ_{n_j} = λ_j and R = λ_j D (see (20)). This leads to the expressions in Proposition 5.1.

Appendix C. PROOF OF PROPOSITION 6.1

Consider Proposition 6.1 as the lemma. Appendix A proves Z ⪰ 0, such that A + Z ⪰ A and [A + Z]^{-1} ⪯ A^{-1}. This concludes the proof for P_α. For P_β, the proof reduces to

  (1/λ_j^2) Q_2 ⪰ C P^{si}_α C^H    (C.1)

First, the proof will be presented for the case Z = 0, and afterwards a generalization to Z ⪰ 0 will be given. For Z = 0, P^{si}_α = [A + 0]^{-1} = P^{2S}_α =: P_α. A and P_α are block-diagonal, so C P^{si}_α C^H = \sum_{p=1}^{n_r} C_p P^{si}_{α_p} C^H_p. Rewrite Q_2:

  Q_2 = \sum_{p=1}^{n_r} (1/2π) \int_{−π}^{π} G'_{jN} S_{Np} G_{jN} S'_{Np} P^{2S}_{α_p} S'^H_{Np} G^H_{jN} S^T_{Np} G'^H_{jN} Φ^2_{r_p} dω
      + \sum_{p=1}^{n_r} \sum_{q=1, q≠p}^{n_r} (1/2π) \int_{−π}^{π} G'_{jN} S_{Np} G_{jN} S'_{Nq} P^{2S}_{α_q} S'^H_{Nq} G^H_{jN} S^T_{Np} G'^H_{jN} Φ_{r_p} Φ_{r_q} dω

where Q_2 is split into two terms. Denote the second term V. Each term of V has the form K P^{2S}_{α_q} K^H Φ_{r_p} Φ_{r_q}. Since Φ_{r_p} and Φ_{r_q} are non-negative scalars and P^{2S}_{α_q} ⪰ 0, it follows that V ⪰ 0. Next, define A_p = G'_{jN} S_{Np} G_{jN} S'_{Np} Φ_{r_p} F, where F is such that P_{α_p} = F F^T. Using this notation, (C.1) can be rewritten as

  (1/λ_j^2) \sum_{p=1}^{n_r} (1/2π) \int_{−π}^{π} A_p A^H_p dω + V ⪰    (C.3a)
  \sum_{p=1}^{n_r} [ (1/2π) \int_{−π}^{π} A_p (1/λ_j) dω ] [ (1/2π) \int_{−π}^{π} A^H_p (1/λ_j) dω ]    (C.3b)

Consider the stronger lemma without the sum and without V:

  (1/2π) \int_{−π}^{π} A_p A^H_p dω ⪰ [ (1/2π) \int_{−π}^{π} A_p dω ] [ (1/2π) \int_{−π}^{π} A^H_p dω ]    (C.4)

Suppose F[a_p(t)] = A_p(e^{jω}), where F denotes the Fourier transform. Then

  a_p(t) = (1/2π) \int_{−π}^{π} A_p(e^{jω}) e^{jωt} dω
  a_p(0) = (1/2π) \int_{−π}^{π} A_p(e^{jω}) dω
  (1/2π) \int_{−π}^{π} A_p(e^{jω}) A_p(e^{jω})^H dω = \sum_{t=−∞}^{∞} a_p(t) a^T_p(t),  with a_p(t) a^T_p(t) ⪰ 0 for all t.

The lemma (C.4) can then be rewritten as

  \sum_{t=−∞}^{−1} a_p(t) a^T_p(t) + a_p(0) a^T_p(0) + \sum_{t=1}^{∞} a_p(t) a^T_p(t) ⪰ a_p(0) a^T_p(0)

which holds. This concludes the proof for Z = 0. For the remaining cases Z ⪰ 0 (Appendix A), the two-stage variance expressions of Proposition 5.1 are unaffected, while the simultaneous minimization variance expressions of Proposition 4.1 only decrease further. The proof thus extends to any Z ⪰ 0.

