ROBUST CONTROL THROUGH SIGNAL CONSTRAINTS

WITH APPLICATION TO PREDICTIVE CONTROL

Dissertation

for obtaining the degree of doctor at the Technische Universiteit Delft,

on the authority of the Rector Magnificus prof. dr. ir. J.T. Fokkema, chairman of the Board for Doctorates,

to be defended in public on Monday 18 December 2006 at 12.30 hours by Rob Anton Jurjen DE VRIES


Assistant promotor: Dr. ir. A.J.J. van den Boom

Composition of the doctoral committee: Rector Magnificus, chairman

Prof. dr. ir. M. Verhaegen, Technische Universiteit Delft, promotor
Dr. ir. A.J.J. van den Boom, Technische Universiteit Delft, copromotor
Prof. dr. R. Babuska, Technische Universiteit Delft

Prof. dr. ir. R. De Keyser, Universiteit Gent

Prof. dr. ir. P.P.J. van den Bosch, Technische Universiteit Eindhoven
Prof. dr. ir. A.C.P.M. Backx, Technische Universiteit Eindhoven

Copyright © 2006 by R.A.J. de Vries.

Printed in The Netherlands by PrintPartners Ipskamp. ISBN-10: 90-9021429-1


Contents

1 Introduction 1

1.1 Predictive control . . . 3

1.2 Robust control . . . 5

1.2.1 Models for uncertain systems . . . 5

1.2.2 Basic principles . . . 6

1.2.3 Current approaches . . . 8

1.2.4 Ideal robustness constraints . . . 14

1.3 Contributions of this thesis . . . 14

1.4 Outline . . . 17

2 Preliminaries 21
Part I: Basic concepts and notation . . . 21

2.1 Signals . . . 21

2.2 Systems . . . 23

2.2.1 General systems and operators . . . 23

2.2.2 Induced norms and p-gains . . . 25

2.2.3 Stability properties . . . 29

2.3 Models for uncertain systems . . . 32

2.3.1 Uncertainty description for linear systems . . . 32

2.3.2 Uncertainty description for non-linear systems . . . 32

2.3.3 Obtaining the models for uncertain systems . . . 35

2.3.4 Reasons for using ℓ1-norm bounded perturbations . . . 35

Part II: New concepts . . . 36

2.4 Type n stability . . . 36

2.4.1 Type n exponential stability . . . 36

2.4.2 Type n strict exponential stability . . . 40

2.4.3 Type n asymptotic stability . . . 41

2.4.4 Type n strict asymptotic stability . . . 42

2.5 Pulse response maps and operators . . . 42

2.6 Semi-linear upper bounds (SLUs) . . . 47

2.7 Basic results . . . 49

2.7.1 Existence of semi-linear upper bounds . . . 49

2.7.2 Use of SLUs and absolute pulse response operators . . . 53

2.7.3 Robust stability of type n exponentially stable NLTV systems in feedback . . . 55


3 RBC: Basic principles 59

3.1 Outline . . . 59

3.2 System based approach to robust control . . . 60

3.2.1 Recapitulation of the system based approach . . . 60

3.2.2 Analysis of system based robustness constraints . . . 62

3.2.3 Conclusions . . . 63

3.3 Basic BIBO-stabilizing RBC constraint . . . 64

3.4 Ext 1 - Type zero strict exponential stability . . . 67

3.5 Ext 2 - Handling disturbances . . . 72

3.6 Ext 3 - Handling additional information . . . 77

3.7 Ext 4 - More general uncertainty structure . . . 82

3.7.1 Process description . . . 83

3.7.2 Generality of the commonly perturbed LTI process . . . 84

3.7.3 Extension for explicitly stable perturbations . . . 86

3.7.4 Conclusions for explicitly stable perturbations . . . 95

3.7.5 Extension for non-explicitly stable perturbations . . . 96

3.7.6 Conclusions for non-explicitly stable perturbations . . . 98

3.8 Ext 5 - Handling physical constraints . . . 99

3.9 Ext 6 - Type zero strict asymptotic stability . . . 103

3.10 Ext 7 - Incorporating tuning objectives . . . 103

3.10.1 Optimality definition and tuning objectives . . . 103

3.10.2 Suitable RBC constraint realizations . . . 104

3.10.3 Suitable realization of the used APROs . . . 105

3.11 Conclusions . . . 107

4 Type zero RBC for LTI processes 111
4.1 System description . . . 112

4.1.1 Definition . . . 112

4.1.2 Properties . . . 116

4.1.3 Summary . . . 120

4.2 Description of type zero RBC entities . . . 120

4.2.1 Definition of type zero RBC entities . . . 120

4.2.2 Interpretation, properties and initialization . . . 126

4.2.3 Summary . . . 127

4.3 Type zero RBC . . . 127

4.4 Type zero RBC for unstable processes . . . 130

4.5 Extensions . . . 133

4.5.1 Simple NLTV perturbations . . . 133

4.5.2 State-space descriptions . . . 135

4.6 Implementation and tuning . . . 136

4.6.1 Implementation for general controllers . . . 137

4.6.2 Tuning for general controllers . . . 138

4.6.3 Implementation in MBPC . . . 149

4.6.4 Tuning in MBPC . . . 154

4.6.5 Overview of tuning rules . . . 154

4.7 Simulation examples . . . 165


4.7.2 Example 4.7.2 . . . 179

4.8 Conclusions . . . 188

5 Type one RBC for LTI processes 191
5.1 System description . . . 192

5.2 Derivation of type one RBC . . . 192

5.2.1 Overview of type zero RBC . . . 193

5.2.2 Type one RBC approach . . . 193

5.3 Definition of type one RBC entities . . . 195

5.4 Type one RBC . . . 204
5.5 Implementation in MBPC . . . 207
5.6 Tuning guidelines . . . 208
5.7 Extensions . . . 211
5.8 Simulation examples . . . 213
5.8.1 Example 5.8.1 . . . 214
5.8.2 Example 5.8.2 . . . 218
5.9 Conclusions . . . 219

6 Type zero RBC for NLTV processes 225
6.1 System description . . . 226

6.1.1 Definition . . . 226

6.1.2 Properties . . . 233

6.1.3 Generality and guidelines to derive the description . . . 242

6.1.4 Summary . . . 251

6.2 Description of type zero RBC entities . . . 252

6.2.1 Definition of type zero RBC entities . . . 252

6.2.2 Interpretation, properties and initialization . . . 255

6.2.3 Summary . . . 255

6.3 Type zero RBC . . . 256

6.4 Extensions . . . 259

6.4.1 When y ∈ S^{n_y} is not automatically guaranteed . . . 259

6.4.2 Asymptotically stable processes . . . 260

6.4.3 Unstable and state-space processes . . . 260

6.5 Implementation and tuning . . . 260

6.5.1 Implementation and tuning for general controllers . . . 260

6.5.2 Implementation and tuning for MBPC . . . 263

6.6 Type one RBC for NLTV processes . . . 267

6.7 Conclusions . . . 268

7 Conclusions and suggestions 271
7.1 General conclusions . . . 271

7.2 Suggestions for further research . . . 276

A Review of basic concepts 279
A.1 Systems . . . 279

A.1.1 Definitions and basic properties . . . 279


A.2 Models for uncertain systems . . . 285

A.2.1 Polytopic or multi-model paradigm . . . 285

A.2.2 Unstructured and structured uncertainty models . . . 286

A.2.3 Generality of uncertainty descriptions . . . 287

A.2.4 Uncertainty description selection for linear systems . . . 290

A.3 Robust control . . . 291

B Supplement to chapter 2 295
B.1 Generality of type n exponential stability . . . 295

B.2 Generality of strict exponential stability . . . 297

C Supplement to chapter 4 299
C.1 Generality of the feedback system . . . 299

C.2 Properties of the RBC entities . . . 301

C.2.1 Interpretational reference guide . . . 302

C.2.2 Properties of the signals in part B of definition 26 . . . 303

C.2.3 Properties of the signals in part C of definition 26 . . . 304

C.3 Initialization of the RBC constraint . . . 309

C.4 Tuning guidelines for general controllers . . . 310

C.4.1 Guidelines to obtain the process related entities . . . 311

C.4.2 Choice of the RBC constraint . . . 312

C.4.3 Design and realizations of the tuning parameters . . . 313

C.4.4 Conclusions . . . 319

D Supplement to chapter 5 321

E Supplement to chapter 6 323
E.1 Properties of the RBC entities . . . 323

E.1.1 Interpretational reference guide . . . 323

E.1.2 Properties of the signals in part B of definition 41 . . . 324

E.1.3 Properties of the signals in part C of definition 41 . . . 325

E.2 Initialization of the RBC constraint . . . 326

F IMC-approach 327

G Proofs 333
G.1 Proofs of chapter 2 . . . 333

G.1.1 Proof of lemma 2 (page 28) . . . 333

G.1.2 Proof of proposition 2 (page 38) . . . 334

G.1.3 Proof of observation 1 (page 38) . . . 337

G.1.4 Proof of lemma 5 (page 49) . . . 337

G.1.5 Proof of theorem 2 (page 49) . . . 338

G.1.6 Proof of corollary 3 (page 51) . . . 339

G.1.7 Proof of observation 2 (page 52) . . . 339

G.1.8 Proof of lemma 6 (page 55) . . . 339

G.1.9 Proof of lemma 7 (page 55) . . . 340

G.1.10 Proof of theorem 3 (page 55) . . . 343


G.2.1 Proof of lemma 9 (page 73) . . . 344

G.2.2 Proof of theorem 5 (page 75) . . . 344

G.2.3 Proof of corollary 5 (page 76) . . . 344

G.2.4 Proof of lemma 10 (page 87) . . . 345

G.2.5 Proof of lemma 11 (page 87) . . . 345

G.2.6 Proof of theorem 6 (page 100) . . . 346

G.2.7 Proof of lemma 13 (page 101) . . . 347

G.3 Proofs of chapter 4 . . . 347

G.3.1 Proof of lemma 21 (page 303) . . . 347

G.3.2 Proof of lemma 23 (page 305) . . . 348

G.3.3 Proof of lemma 24 (page 306) . . . 349

G.3.4 Proof of lemma 25 (page 306) . . . 350

G.3.5 Proof of lemma 26 (page 308) . . . 351

G.3.6 Proof of theorem 7 (page 128) . . . 353

G.3.7 Proof of theorem 8 (page 128) . . . 353

G.4 Proofs of chapter 5 . . . 361

G.4.1 Proof of proposition 13 (page 198) . . . 361

G.4.2 Proof of theorem 9 (page 205) . . . 361

G.4.3 Proof of theorem 10 (page 205) . . . 362

G.4.4 Proof of proposition 14 (page 206) . . . 363

G.4.5 Proof of corollary 10 (page 206) . . . 365

G.5 Proofs of chapter 6 . . . 366

G.5.1 Proof of lemma 20 (page 238) . . . 366

G.5.2 Proof of lemma 28 (page 324) . . . 369

G.5.3 Proof of theorem 11 (page 257) . . . 370

G.5.4 Proof of theorem 12 (page 257) . . . 370


Chapter 1

Introduction

A controller that can handle unexpected process behavior is desirable, because our knowledge of the process to be controlled is often flawed. For this reason, robust control has been the subject of much research. Of course, there is a limit to the amount of unexpected process behavior that can be handled, making some knowledge about these possible discrepancies necessary.

To design a robust controller one needs the following ingredients, as illustrated in figure 1.1: A nominal model which describes the basic behavior of the process. A description of the model uncertainty which specifies the set of possible perturbations from the nominal model. Design parameters which specify the desired behavior of the controlled process.


Figure 1.1: Robust controller design.

In general, the controller is designed by the specification of a criterion function and a way to minimize it. In some cases it is minimized with respect to the controller parameters, making it a system based approach because the controller mapping is determined explicitly. In other cases, it is minimized with respect to the controller output, making it a signal based approach because the controller mapping is not actually determined, only its output signal is.

Robustness is obtained by specifying system based robustness constraints on the realization of the controller or signal based robustness constraints on the controller output. In some cases these robustness constraints are incorporated in the criterion function. This usually leads to the so-called min-max approach, in which robustness is obtained by minimizing the worst case realization of the criterion function. In other cases the criterion function is minimized subject to the robustness constraints. Of course, mixtures of these two approaches are also used.

The design parameters in figure 1.1 usually consist of weighting filters in the criterion function, a specification of the desired process output, signal constraints and, possibly, robustness constraints.

Basic problems in robust controller design

A number of closely related key problems in robust controller design are listed below.

Conservatism. In general, the min-max and system based approaches lead to conservative controllers.

This is inherent to min-max approaches because they optimize robust (worst case) performance over nominal or true (actually obtained) performance. Many (min-max) system based approaches aim at obtaining a fixed controller realization that performs well for all possible process realizations, which adds to the conservatism of the controller in general.

One way to reduce conservatism is to use a nominal criterion function subject to signal based robustness constraints. These constraints can usually be made unconservative by letting them only guarantee a very minimal robust performance. Because they are evaluated on-line, they can also take the true behavior of the process into account, which is often much better than its worst case behavior.

Incorporation of model uncertainty knowledge. The more one knows about the model uncertainty, the less conservative the robust controller can become. However, a major problem is to optimally incorporate all possible knowledge about the model uncertainty in a transparent way while still obtaining a tractable minimization problem.

Restrictive assumptions. Some approaches are explicitly tailored to optimally incorporate some specific knowledge and/or properties of the model uncertainty. However, they consequently assume that this information can always be obtained and/or that the model uncertainty has these specific properties. This considerably restricts their general applicability.

All approaches will, of course, make some general assumptions with respect to the nominal model, the process and/or the controller, e.g. that they are linear and time-invariant (LTI). These assumptions can also significantly limit generality.

Signal constraints. In practice, there are almost always constraints present on the input and often also on the output (or states) of the system. Though most signal based approaches are especially tailored to incorporate these constraints in a straightforward and optimal way, this is not the case for many system based approaches.

Minimization. In many cases a difficult, often non-linear minimization problem results. This minimization problem usually becomes more complex when more information about the perturbations is incorporated in the approach. Consequently, one often cannot guarantee that the global minimum is found, resulting in possibly far from optimal solutions.

Feasibility. The more complex the minimization problem becomes, the harder it becomes to guarantee that there always exists a feasible solution that gives the desired robust performance (especially in the presence of other, e.g. physical and safety, constraints).


Transparent tuning. Although less important from a purely theoretical point of view, transparent and easy tuning of the robust controller is very important from a practical point of view. No matter how good an approach might be in theory, transparent tuning more or less determines whether or not the approach will be used in practice and not only by a few experts.

The influence of the design parameters on the behavior of the controlled process can be difficult to predict in some approaches, because of their complexity. In some other cases, the optimal choice of the design parameters given the available information about the process is a discipline in itself. This not only makes these approaches hard to implement, but also difficult to use to their fullest potential.

Contribution of this thesis

In this thesis we have tried to develop an approach that circumvents the problems listed above. Starting from the most basic problem, more and more complex problems and situations are considered. At every step special precautions are taken to circumvent the basic problems as much as possible. This often comes at the cost of introducing other problems or restrictions, which makes the approach suited for some applications, while other approaches are more suited for other applications.

The resulting robustness constraints can be used in combination with any controller design strategy. However, they are especially suited to be used in combination with model based predictive control (MBPC), as is explained in the next section.

Outline of this chapter

In the next section we will discuss MBPC and why it is chosen as the basic controller design strategy in this thesis. In section 1.2 we will delve a bit further into the basic principles of robust control. Then an overview of the (in the context of this thesis) state of the art in robust control is given.

It is important to have a clear understanding of what we would actually like to achieve. We will therefore specify the properties of, in our view, ideal robustness constraints, whether realistic or not.

In section 1.3 the contribution of this thesis to the field of robust control is discussed in more detail, followed by an outline of this thesis in section 1.4.

1.1 Predictive control

The concept of model based predictive control (MBPC) was introduced simultaneously by Richalet [79] and Cutler and Ramaker [20] in the late seventies. Since then it has proven to be very successful both in theory and in practice [80]. This last property is not surprising because predictive control is one of the few advanced control methods that originated from industry.

The basic concept of predictive control is illustrated by figure 1.2. Based on a model of the process, its output (y) is predicted at the current sample time k over the prediction horizon (Hp) as a function of the future controller outputs (u). Then a criterion function specifying the desired future behavior of the process is minimized with respect to the future controller outputs, usually subject to signal constraints. The optimal value of u at time k is then applied to the process and the whole procedure is repeated at the next sample time, according to the receding horizon principle.



Figure 1.2: Predictive control concept.

Δu(k + j) = 0    for 0 < Hc ≤ j ≤ Hp − 1
|u(k + j)| ≤ u_upp(k + j)    for 0 ≤ j ≤ Hp − 1
|Δu(k + j)| ≤ (Δu)_upp(k + j)    for 0 ≤ j ≤ Hp − 1

where ŷ denotes the predictions of the process outputs, r the future reference trajectory, ρ some positive, scalar weighting and Δ the discrete difference operator. The control horizon Hc is one of the most important tuning parameters in MBPC; the future controller values calculated at time k are assumed to remain constant after Hc samples.

The given level and rate constraints on u are present in almost all practical situations and are also considered in this thesis.
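To make the receding-horizon idea concrete, the sketch below sets up one such constrained optimization for a SISO process described by a finite impulse response model. It is only an illustration of the concept, not the formulation used in this thesis: the model g, the horizons Hp and Hc, the weighting ρ and the level and rate bounds are made-up values, the predictor assumes zero initial conditions, and scipy's general-purpose SLSQP solver stands in for whatever optimization routine an actual MBPC implementation would use.

```python
# Minimal receding-horizon (MBPC) sketch for a SISO FIR process model.
# All numerical values (model, horizons, weights, bounds) are illustrative only.
import numpy as np
from scipy.optimize import minimize

g = np.array([0.0, 0.4, 0.3, 0.15, 0.1, 0.05])   # impulse response of the nominal model
Hp, Hc, rho = 8, 3, 0.1                           # prediction horizon, control horizon, weight
u_upp, du_upp = 1.5, 0.3                          # level and rate bounds on u

def future_inputs(du, u_prev):
    """u(k..k+Hp-1) from the first Hc moves du; Delta u = 0 afterwards."""
    du_full = np.concatenate([du, np.zeros(Hp - Hc)])
    return u_prev + np.cumsum(du_full)

def predict(u_future, u_past):
    """Predicted y(k+1..k+Hp) from the FIR model (zero initial conditions assumed)."""
    u_all = np.concatenate([u_past, u_future])
    conv = np.convolve(u_all, g)
    return conv[len(u_past) + 1 : len(u_past) + 1 + Hp]

def mbpc_step(r, u_past):
    """One receding-horizon step: optimize the future moves, return u(k)."""
    u_prev = u_past[-1]
    cost = lambda du: (np.sum((predict(future_inputs(du, u_prev), u_past) - r) ** 2)
                       + rho * np.sum(du ** 2))
    cons = [{"type": "ineq",
             "fun": lambda du: u_upp - np.abs(future_inputs(du, u_prev))}]
    res = minimize(cost, np.zeros(Hc), bounds=[(-du_upp, du_upp)] * Hc,
                   constraints=cons, method="SLSQP")
    return u_prev + res.x[0]            # only the first optimized move is applied

u_hist = list(np.zeros(len(g)))         # past controller outputs
r = np.ones(Hp)                         # unit step reference over the horizon
for k in range(20):
    u_hist.append(mbpc_step(r, np.array(u_hist)))
print(np.round(u_hist[-5:], 3))
```

At every sample time only the first optimized move is applied and the optimization is repeated with the shifted horizon, which is exactly the receding horizon principle described above.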

Every predictive control problem consists of the specification of a process model, a criterion function and, in all practical situations, signal constraints. Because of the many degrees of freedom in this problem specification, there exist many different predictive controllers which are all based on the same concept but solve different predictive control problems. The most famous is generalized predictive control (GPC), introduced by Clarke in 1987 [17] and extended to the multivariable case by Kinnaert in 1989 [49].

In 1992 Soeterboek [91] unified the most important predictive control methods by using a unified process model and criterion function, that could realize most of these control problems for single input, single output (SISO) processes. This approach was extended to the multivariable case in [25], [26], [27]. Both approaches use input-output models. Using state-space models, the unification started by Soeterboek was carried one step further in [29] and [96] by formulating a standard predictive control problem, which consists of one extended process description.

In practice a predictive controller can usually be tuned quite easily to give a stable closed-loop and to be robust with respect to model mismatch. However, even a feasible, nominal stability guarantee was not available for quite some time. Only in the early nineties did a quite general nominal stability (and feasibility) theory emerge.


objective function or the use of very strict end-point constraints, by using the following measures ([7], [66]): a terminal cost (which defines a separate weighting of ŷ(k + Hp) − r(k + Hp) in the criterion function), a terminal stability constraint (which defines a region in which y(k + Hp) must reside) and/or a "local control policy" (which defines the control actions beyond the prediction horizon). The disadvantage of these approaches is that the influence of these measures on the obtained performance and feasibility can be quite hard to predict.

The underlying reasoning behind the approaches that guarantee nominal stability is as follows. Nominal stability can be guaranteed in a feasible way by making sure that at every sample time k there exists a feasible realization of u(k + j) for all j ≥ 0, such that the criterion function with Hc = Hp = ∞ remains bounded when the disturbances and the reference trajectory are constant beyond the chosen control horizon.

Reasons for choosing MBPC as basic controller

From a general point of view, there are many reasons to use MBPC. It is one of the few methods that can easily handle signal constraints in a systematic and optimal way. Moreover, it is a very open, flexible and transparent methodology. This shows itself in the fact that no specific controller structure is enforced and that complex processes (e.g. multivariable or nonlinear) can be controlled without many special precautions or theoretical background. Furthermore, it can easily be made adaptive and is easy to tune for good nominal, true or robust performance. The tuning can even be automated by using the accuracy of its own predictions.

The robustness constraints developed in this thesis can be used in combination with any controller. However, also in the context of this thesis there are many reasons to use MBPC. One reason is that MBPC can take the corrective actions that the robustness constraints dictate into account in a straightforward way. This will give better results than when these constraints are enforced on the output of a controller that cannot do this.

Another reason to use MBPC is that the tuning of the presented robustness constraints is closely related to the tuning of MBPC, with similar effects on the controlled system. This is no coincidence, because many ideas from MBPC are used in their development. This makes it especially easy to tune these constraints in an optimal way with respect to the goal of an “inner-loop” MBPC, and to further orchestrate their effect to optimize nominal, true or robust performance.

1.2 Robust control

In this section we will briefly discuss the subject of robust control. First we will discuss how we can model uncertain systems to obtain two of the basic ingredients needed for the design of a robust controller: A model of the process and some description of how far the process might deviate from it. Then we will present the basic principles used to guarantee robustness, followed by a discussion of the most important robust controllers to date. We finish by specifying desired properties of robust controllers.

1.2.1 Models for uncertain systems


Polytopic or multi-model descriptions. In this description the true process is assumed to be some linear combination of a set of candidate models. For example, suppose that we have N input-output data sets for a system representing the different circumstances in which it operates. Each data set will present us with one possible model Gi for the process. Then, it is quite likely that the true process P can be correctly modelled as follows:

P = Σ_{i=1}^{N} λi Gi = Gnom + Σ_{i=1}^{N} λi (Gi − Gnom)    (1.2)

for some non-negative λ1, λ2, . . . , λN summing to one. So, when Gnom is taken as the nominal model, the possible deviations of P from Gnom are represented by Σ_{i=1}^{N} λi (Gi − Gnom).
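As a small numerical illustration of (1.2), the snippet below forms a convex combination of three candidate models given by finite impulse responses and verifies that it equals the nominal model plus the weighted deviations. The candidate models, the choice of nominal model and the weights are invented for the example and do not come from the thesis.

```python
# Numerical illustration of the multi-model description (1.2): the "true" process
# is a convex combination of N candidate FIR models.  All data are made up.
import numpy as np

G = np.array([[1.0, 0.5, 0.2],      # impulse response of candidate model G1
              [0.8, 0.6, 0.3],      # G2
              [1.2, 0.4, 0.1]])     # G3
G_nom = G.mean(axis=0)              # one possible choice of nominal model

lam = np.array([0.2, 0.5, 0.3])     # non-negative weights summing to one
assert np.all(lam >= 0) and np.isclose(lam.sum(), 1.0)

P = lam @ G                         # true process, eq. (1.2)
deviation = lam @ (G - G_nom)       # its deviation from the nominal model
assert np.allclose(P, G_nom + deviation)
print(P, deviation)
```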

Structured uncertainty descriptions. In this description the nominal model is taken as the basis and the special ways in which the process might deviate from it are structured around it. An example is given in figure 1.3, where the true process P = (I + Ω)G, in which G is the nominal model and Ω the model uncertainty.

Figure 1.3: Output multiplicative model uncertainty structure

Our knowledge about how and how much the process might deviate from G is represented by our knowledge of the structure of the deviations and the possible realizations of Ω. The minimally required knowledge about Ω is usually an upper bound on its ℓ1- or H∞-norm (for a discussion of norms, see section 2.2.2).

In this thesis we will assume that an ℓ1-norm bounded structured uncertainty description of the process is available. How it is obtained falls beyond the scope of this thesis (see e.g. [22], [33], [45] and [76]). Since a polytopic description can be transformed to a structured one (see e.g. section A.2.3), the availability of a polytopic description is also sufficient.

The reasons for using the ℓ1-norm are explained in chapter 2. The most important ones are that it is relatively easy to obtain and that it is the most natural choice when one works in the time domain and in the presence of signal constraints. Its main disadvantage is that it is usually quite conservative.

1.2.2 Basic principles

We have briefly discussed how we can model uncertain systems. The remaining question is how we can guarantee that the controlled, uncertain system will be robustly stable. There are two basic tools that can be used for this: the "small-gain theorem" and "contraction constraints", which are discussed below.


For example, consider the feedback system in figure 1.4, where we used the structured uncertainty description of figure 1.3 and ξ denotes some external disturbance. Assume that it is known that ‖Ω‖1 < γ for some known bound γ. Then this feedback system will be BIBO stable when the ℓ1-norm of the controller mapping Q satisfies

‖Q‖1 < 1/γ    (1.3)

which is known as the ℓ1-robust stability constraint.

The small-gain theorem is very powerful since it is very simple and transparent, valid for multivariable, non-linear, time-varying systems and requires minimal hard information about the model uncertainty; only an upper bound on e.g. its ℓ1-norm is needed.


Figure 1.4: Typical uncertain feedback system.
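The sketch below illustrates the small-gain check behind (1.3) in the simplest possible setting, where Ω and Q are SISO systems with finite impulse responses so that their ℓ1-norms reduce to sums of absolute values. The impulse responses are invented for the example and carry no meaning beyond demonstrating the check.

```python
# Small-gain check behind (1.3), assuming Omega and Q are SISO systems given by
# finite impulse responses, so their l1-norms are sums of absolute values.
# The impulse responses below are illustrative only.
import numpy as np

omega = np.array([0.05, -0.03, 0.02, -0.01])   # perturbation impulse response
q     = np.array([1.0, -0.6, 0.25, -0.1])      # controller mapping Q

l1 = lambda h: np.sum(np.abs(h))
gamma = l1(omega)                  # a bound on ||Omega||_1 (here its exact norm)

if l1(q) < 1.0 / gamma:            # the l1 robust stability constraint (1.3)
    print("small-gain condition satisfied: robust BIBO stability guaranteed")
else:
    print("small-gain condition violated: no guarantee from (1.3)")
```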

Contraction mapping principle. Roughly speaking, this approach guarantees that some (Lyapunov) function of the signals in the feedback system is a contraction mapping: It remains bounded when the external signals (e.g. the disturbances and the reference trajectory) remain bounded, and converges to zero when the external signals attain their steady-state behavior (which can be zero, constant, ramp-wise, etc. depending on the approach used). This function must be such that when it remains bounded all signals in the system are guaranteed to be bounded and that when it converges to zero, perfect tracking is obtained.

The guarantee that a contraction mapping exists can be obtained in two ways.

• By specifying some “explicit contraction constraint” that must be satisfied in the minimization problem.

• By choosing a criterion function that behaves as a contraction constraint when minimized.

To guarantee robust stability, one has to guarantee that there exists a contraction mapping for all possible perturbations of the process.

An example of the use of contraction constraints is the approach described in section 1.1 to guarantee nominal stability in MBPC. It basically guarantees that the nominal "extended" criterion function is a contraction mapping.


1.2.3 Current approaches

In this section we will discuss the, in the context of this thesis, most important robust control approaches that are available to date. The basic principle of each approach will be given, followed by its pros and cons.

The overview presented here is not intended to be a thorough survey of the current state of the art in robust control, because many well written surveys are already available. Its main purpose is to sketch the different viewpoints from which robust control problems can be tackled.

For a survey on robust model predictive control one is referred to [7]. An overview of the approaches based on the Youla-Kucera parametrization is given in [2].

The available robust control approaches can roughly be divided into those using a system based and those using a signal based approach. Almost all system based approaches are based on the small-gain theorem, while most signal based approaches use contraction constraints.

System based approaches

H∞- and ℓ1-robust control. These are the most well-known system based approaches and have been the subject of much research for many decades. In these approaches the controller is first structured in a specific way. Then performance is specified by defining those mappings in the system of which the H∞- or ℓ1-norm should be as small as possible. These mappings are gathered in a criterion function which is minimized (over all possible perturbations) with respect to the controller mapping, subject to the robust stability constraint obtained from the small-gain theorem.

For example, consider the system in figure 1.4. Then it is quite natural to try to minimize the ℓ1-norm of the mapping from the disturbances ξ to the process output y. This leads to the following, often used, criterion function:

max_{‖Ω‖1 < 1/γ} ‖W (I + GQ)^{−1} (I − ΩQ)^{−1}‖1    (1.4)

where W is some weighting filter. As is often the case in these approaches, the robust stability constraint (1.3) is incorporated in (1.4) and one can simply say that if (1.4) can be made smaller than one, robust performance is obtained.

The H∞- and ℓ1-robust control approach for LTI systems has been well researched and a large body of results and insights is available, which can be found in any major textbook on this subject (e.g. [22] and [69]). A tutorial on H∞-control is given in [56] and a further discussion of the basic principles is given in section A.3. Extensions of H∞- and ℓ1-robust control to NLTV systems can be found in [22], [46] and [85].

H∞- and ℓ1-robust control theory provides a theoretical framework for dealing with robust performance issues. However, standard H∞- and ℓ1-robust control methodologies have several drawbacks [62]: the resulting controllers are conservative, the resulting optimization problems are difficult to solve, the controller is required to be structured in a specific way and hard input and output constraints cannot be added to the controller design procedure in a straightforward manner.


Though this can be circumvented by computing an upper bound on the structured norm, doing so introduces conservatism back into the solution.

Mixed objective approaches. In ℓ1-robust control, all objectives (like e.g. robust stability, (robust) performance, signal constraint satisfaction, etc.) are formulated as ℓ1-norm constraints on the maps in the controlled system, as discussed earlier. In practice, we often want to meet several different objectives, formulated in different ways. These are called mixed objectives. Examples of mixed objectives are as follows:

• Find a controller that minimizes a nominal LQG-criterion function, subject to the condition that the feedback system is ℓ1-robustly stable. This problem is known as the mixed LQG/ℓ1- or mixed H2/ℓ1-robust control problem.

• Find a controller that minimizes a nominal MBPC-criterion function at every sample time k, subject to signal constraints and the condition that the feedback system is ℓ1-robustly stable. This problem is known as the mixed MBPC/ℓ1-robust control problem.

The basic design approach taken in most mixed objective system based approaches is as follows: First the controller is structured by using the set of all nominally stabilizing controllers (see e.g. figure 1.4 and section 2.2.3). Then the desired criterion function is minimized with respect to the “free part” of the controller mapping Q, subject to (small-gain based) robustness constraints on Q.

There are many examples of mixed objective approaches in both LQG control and MBPC.

One example of a mixed MBPC/ℓ1-robust control approach was developed by the author of this thesis in close cooperation with Ton van den Boom. We decided to work on this approach because it seemed to offer the following two main advantages: easy extension to non-linear systems (because of the validity of the small-gain theorem) and easy incorporation of the vast knowledge and results from ℓ1- and H∞-robust control.

The history of the development can be found in [28], [93] and [94], and the accumulation of the main results in [96]. It turned out to be quite difficult to exploit the possible advantages, and the approach has the following disadvantages: a significant loss of transparency and a complex problem formulation, because the controller parameters are optimized instead of the controller output. Only robust BIBO stability was guaranteed. Although the theoretical extension to more advanced robust stability or performance guarantees is not too difficult, the minimization problem quickly becomes intractable: it can no longer be guaranteed that the global minimum of the resulting non-linear minimization problem can be found nor that a feasible solution will always exist. Other drawbacks were the usual conservatism (because the small-gain theorem is used) and the difficult incorporation of possible knowledge about the perturbations. Similar advantages and drawbacks exist for most other mixed objective approaches (see e.g. [44], [72] and [86]).

Indirect approaches. In the H∞- and ℓ1-robust control approach, the controller realization is optimized subject to a robust stability or performance constraint. Another approach is to compute a controller by any controller design method and then simply check whether or not it satisfies these robustness constraints. Or, to determine for which class of uncertainties the resulting controller will satisfy these robustness constraints. Examples of this approach for LQG control can be found in e.g. [43] and [44] and for MBPC in e.g. [34], [47] and [50].


MBPC is one of the design methodologies that lends itself to this approach, because predictive controllers usually can be tuned quite easily to be robust with respect to model mismatch. Examples of the use of this "indirect approach" for MBPC can be found in [18], [59] and [91]. These indirect approaches can be seen as mixed objective approaches, but we have placed them in a separate class for clarity.

The indirect MBPC approach gives satisfactory results in the unconstrained case for LTI processes. However, in the constrained case and/or for NLTV processes, robustness analysis is much more difficult, resulting in more complex and/or conservative tuning rules ([39], [104]). In both cases the major drawbacks are the limiting assumptions, the loss of transparency and the difficult optimal incorporation of detailed model uncertainty knowledge which, combined with the questionable optimality of the tuning rules, introduces conservatism. Nonetheless, these approaches provide valuable insight into the "inherent" robustness of, and tuning rules that generally improve robust performance for, the specific controller design methodology.

Signal based approaches

Over the past decade significant progress has been made in signal based robust control, especially in connection with MBPC. Below we will discuss the most popular of these developments, which can be divided into two classes. The first ensures that the criterion function which is optimized at each sample time forms a contraction mapping for all possible models. The second defines some explicit constraint that forms a contraction mapping for all possible models.

Criterion function as contraction constraint. As shown earlier, in ℓ1- and H∞-control a criterion function like (1.4) is minimized over all possible perturbations with respect to the controller realization. When smaller than one, robust performance is obtained while robust stability is guaranteed by the small-gain principle.

The purest signal based approach is very similar. A suitable criterion function is defined, which is guaranteed to form a contraction mapping when minimized with respect to the controller output over all possible perturbations, thereby guaranteeing robust stability, while robust performance is obtained because the "worst case" criterion function is minimized. These approaches are called min-max approaches.

Early examples of min-max approaches in MBPC are [1], [23], [105] and [106]. These approaches were based on impulse or step response models with bounded errors on the Markov parameters. The disadvantages of a restrictive uncertainty description and a computationally intensive on-line min-max optimization were circumvented by the approach introduced by Kothare et al. [51] (see also [52]). This approach, which is based on state-space models, uses a polytopic uncertainty description and linear matrix inequalities (LMI [12]) to reduce the computational load of the optimization. Generalizations of this approach have produced a wealth of approaches that extend the following concepts used to guarantee stability for nominal MBPC (see section 1.1 for details), to robust MBPC: terminal costs, terminal stability constraints and local control policies. A rough outline of the basic concepts of these approaches is given below and is largely based on [15] and [101].

Consider the following LTV state-space model

x(k + 1) = A(k) x(k) + B(k) u(k) (1.5)

y(k) = C x(k)


The uncertainty is described by Ω, the convex hull of {(A1, B1), (A2, B2), . . . , (AN, BN)}: (A(k), B(k)) ∈ Ω if and only if there exist μ1(k), μ2(k), . . . , μN(k) such that

A(k) = Σ_{i=1}^{N} μi(k) Ai  and  B(k) = Σ_{i=1}^{N} μi(k) Bi,  with 0 ≤ μi(k) ≤ 1 and Σ_{i=1}^{N} μi(k) = 1

This uncertainty description is known as a polytopic uncertainty description (see also section 1.2.1). Define the following state feedback control law

u(k) = K x(k)

and a (robust) invariant terminal set W which is such that the following holds for all x(k) ∈ W and for all i ∈ [1, N]:

• (Ai + Bi K) x(k) ∈ W

• (Ai + Bi K) x(k) and u(k) = K x(k) satisfy all state and input constraints, respectively.

Define ũ(k) = [u(k), u(k + 1), . . . , u(k + Hp − 1)] and

J(k) = Σ_{j=1}^{Hp−1} x̂(k + j)^T Q x̂(k + j) + Σ_{j=1}^{Hp} u(k + j − 1)^T R u(k + j − 1) + x̂(k + Hp)^T F x̂(k + Hp)    (1.6)

where x̂(k + j) is the prediction of x(k + j) given information up to and including time k, Q and R are positive definite real matrices and F is a symmetric positive definite real matrix. Define the following min-max optimization problem

min_{ũ(k)} max_{(A(k), B(k)) ∈ Ω} J(k)    (1.7)

subject to

u(k + j) = K x̂(k + j)    ∀ 0 ≤ Hc ≤ j ≤ Hp
x̂(k + Hp) ∈ W    ∀ (A(k), B(k)) ∈ Ω

and subject to the constraints that u(k + j) satisfies all input constraints and that ˆx(k + j) satisfies all state constraints that are possibly present in the system for all j ∈ [1, Hp] and for all (A(k), B(k)) ∈ Ω.

Let the optimization problem be solved and let u(k) be applied to the system at every k. Then the state is controlled to the origin and the system is robustly stable for all (A(k), B(k)) ∈ Ω, when K and F = F^T > 0 satisfy the following equation for all i ∈ [1, N]

F − Q − K^T R K − (Ai + Bi K)^T F (Ai + Bi K) ≥ 0    (1.8)

The problem of finding a K and an F that satisfy (1.8) can be recast as an LMI based optimization problem for which fast and effective algorithms are available [12].
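As an illustration of what condition (1.8) amounts to, the sketch below checks it at every vertex of a polytope for a given candidate K and F by testing positive semi-definiteness of the left-hand side. The vertex data and the candidate K and F are made up for the example; in practice K and F would of course be computed from the LMI formulation mentioned above rather than guessed.

```python
# Sketch: verify condition (1.8) at every vertex (A_i, B_i) of the polytope for a
# given state feedback K and terminal weight F.  All data below are illustrative.
import numpy as np

A_list = [np.array([[0.9, 0.1], [0.0, 0.8]]),
          np.array([[0.95, 0.1], [0.0, 0.7]])]        # vertices A_i
B_list = [np.array([[0.0], [1.0]]),
          np.array([[0.0], [1.2]])]                   # vertices B_i
Q = np.eye(2)                                         # state weight
R = np.array([[0.1]])                                 # input weight
K = np.array([[0.0, -0.5]])                           # candidate feedback gain
F = 20.0 * np.eye(2)                                  # candidate terminal weight

def satisfies_1_8(A, B):
    Acl = A + B @ K
    M = F - Q - K.T @ R @ K - Acl.T @ F @ Acl         # left-hand side of (1.8)
    return bool(np.all(np.linalg.eigvalsh((M + M.T) / 2) >= -1e-9))

print(all(satisfies_1_8(A, B) for A, B in zip(A_list, B_list)))
```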


The outlined approach has the following disadvantages:

• The influence of K, F and W (which are interdependent) on the resulting performance and feasibility can be quite hard to predict.

• A polytopic uncertainty description is assumed to be available and to avoid a too conservative controller, the model uncertainty description should be rather accurate.

• The computational load of a min-max optimization problem like (1.7) is quite large and usually grows exponentially with the prediction horizon.

• The outlined approach can control uncertain systems without steady-state offset only at the origin and only when there are no persistent non-zero disturbances. Perfect tracking of non-zero setpoints in the presence of constant but non-zero disturbances is not guaranteed, even for LTI systems. The main reason for this is that the steady-state value of the input that ensures non-zero setpoint tracking is unknown because the realization of the system is unknown.

Below we will discuss the most important contributions that have been developed to circumvent the above given disadvantages.

In ([8], [9], [10], [48], [92]) it is shown that, given a polytopic uncertainty description, most approaches can be recast as multi-parametric programs that shift most of the computational load to the off-line domain. The remaining on-line computational load is then usually small enough to allow practical application.

Casavola et al. [16] extend the outlined approach to a norm bounded uncertainty description, instead of a polytopic one. By minimizing upper bounds on the (closed-loop) worst-case realization of the criterion function (1.6), they reduce the computational load such that the number of LMIs grows only linearly with the prediction horizon Hp, instead of exponentially.

Grieder et al. [42] reduce the computational load while obtaining the largest possible robust invariant set together with the associated feedback law by using a nominal criterion function without a terminal set constraint. Robust stability is not guaranteed, but an analysis procedure is proposed which establishes the range of uncertainties that will be stabilized by the resulting controller (making this an "indirect" signal based approach).

To reduce the conservatism of the outlined approach in the presence of disturbances and state constraints and to make the existence of a feasible solution more likely, the use of closed-loop predictions is proposed in [6], [54] and [88]: it is assumed that u(k) = K x(k) + v(k) for all k and the minimization of (1.7) is done with respect to v instead of u.

Pannochia and Semino [74], [90] propose to increase controller robustness and to optimize the impact of K (and F and W) on the performance in the following way. First choose F = 0, u(k + j) = K x̂(k + j) for all j ≥ 0 and Hp = ∞. Solve (1.7) with respect to K instead of ũ(k) without constraints. Then find an F as close as possible to Q that satisfies (1.8).

Langson et al. [57] and Giovanini and Grimble [40] extend the above approaches by optimizing the feedback gains K together with the sequence v at every sample time.


it uses a nominal model in the on-line criterion function (making it partly an explicit contraction constraint approach).

Another approach that guarantees non-zero setpoint tracking for stable perturbed LTI systems is given by Rodrigues and Odloak [82], which uses a special representation of the system that allows the use of an infinite horizon cost function in the min-max optimization problem.

Explicit contraction constraints. Min-max approaches that minimize the worst-case performance cost have attracted most research. One disadvantage of min-max approaches is that they potentially yield conservative controllers. On the other hand, they do guarantee good performance for all possible models. When all possible models are equally likely, min-max approaches will provide the best results. However, in many applications there are clear indications as to which model or set of models is the most likely to be encountered by the controller [40]. E.g. when one has a polytopic uncertainty description obtained by linearizing a non-linear model in different operating points, the current operating point indicates the most likely linear model. It is then likely that better performance of the true system can be obtained by using the most likely, possibly time varying, model in the criterion function. Robust stability should then be enforced by some explicit robust stability constraint.

The use of an explicit robust stability constraint is quite similar to the mixed objective system based approaches. An arbitrary criterion function is minimized with respect to the controller output, subject to an explicit contraction constraint that guarantees robust stability or even performance. For stable systems, Zheng [107] introduces the following stability constraint for the nominal case [7]

‖x̂(k + 1)‖P ≤ λ ‖x(k)‖P,    0 < λ < 1    (1.9)

which forces the state to contract. When P is chosen as the solution of the Lyapunov equation A^T P A − P = −Q, with P and Q positive definite real matrices, then this constraint can always be met for some u (u(t + k) = 0 satisfies this constraint and any other constraint on u). Robust stability is achieved by requiring the state to contract for all possible systems. In other words, by maximizing ‖x̂(k + 1)‖P in the constraint (1.9) over all possible system realizations.
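The following sketch illustrates the nominal version of (1.9): P is obtained from the Lyapunov equation A^T P A − P = −Q with scipy, and the weighted norm ‖x‖P = sqrt(x^T P x) is seen to contract when u = 0 is applied. The system matrix, Q and the initial state are invented for the example.

```python
# Sketch of the nominal contraction constraint (1.9): with P solving the Lyapunov
# equation A^T P A - P = -Q, the norm ||x||_P = sqrt(x^T P x) contracts for u = 0
# on a stable system.  The data below are illustrative only.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[0.9, 0.2], [0.0, 0.7]])
Q = np.eye(2)
P = solve_discrete_lyapunov(A.T, Q)       # solves A^T P A - P = -Q

norm_P = lambda x: float(np.sqrt(x @ P @ x))
x = np.array([1.0, -2.0])
for _ in range(5):
    x_next = A @ x                        # u = 0 satisfies (1.9) and any input constraint
    print(round(norm_P(x_next) / norm_P(x), 3))   # ratio < 1: the state contracts
    x = x_next
```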

Badgwell [3] formulated a number of explicit conditions or constraints that guarantee robust stability using impulse and step response models. For the case of stable linear systems of which the model uncertainty is specified by a known finite set of possible models, Badgwell [4] proposes the following (quadratic) robust stability constraint for each plant i in the given set

Ji(ũ) ≤ Ji(ũ*)

where Ji(ũ) is the criterion function for model i at time k and ũ* is the shifted optimal sequence of controller outputs calculated at time k − 1 with u*(k + Hp) set to 0.

Giovanini and Grimble [40] use closed-loop predictions, optimizing the feedback gain K and the free parameters v in the state feedback law u(k) = K x(k) + v(k) at every sample time, using a nominal model and an end-point constraint which requires the controller output and all possible plant outputs to be (close to) constant. Fukushima and Bitmead ([35], [36]) propose a similar approach, but transform the robustness constraints to sufficient linear constraints on v.


1.2.4 Ideal robustness constraints

So far we have discussed the basic problems in robust control, its basic principles and the benefits and problems associated with different approaches. This does not necessarily make it clear what we would actually like to achieve with robust control, which is important when discussing different trade-offs. So, before discussing the contributions of this thesis, we will list the properties of ideal (but probably unobtainable) robustness constraints as we see them.

Ideal robustness constraints have the following properties:

• They are valid for multivariable, non-linear, time-varying processes. They are capable of taking hard signal constraints into account in a straightforward and optimal way, and are guaranteed feasible.

• They are control method independent and easy to implement. That is, they can be implemented easily in any controller design method. They are linear with respect to the controller output and, hence, do not introduce a significant computational burden or local minima in a possible minimization problem.

• They are open and flexible, easy to specify, easy to interpret and easy to tune (off- and on-line) between good robust, nominal and true performance.

• They are as unconservative as possible. That is, they are necessary and sufficient to guarantee the specified (robust) performance for the actual, but unknown, realization of the model uncertainty.

• They only need the most basic knowledge about the model uncertainty, like e.g. a bound on its ℓ1-norm and the fact that it has fading memory. However, they are capable of exploiting any additional hard (100% certain) or soft (less than 100% certain) information about the model uncertainty in a clear, transparent and optimal way.

• They allow easy incorporation of the vast knowledge contained in other, e.g. conventional (ℓ1- or H∞-) robust control approaches.

We do not pretend that the robustness constraints presented in this thesis are ideal. Like all approaches, they have properties that are close to ideal and disadvantages that are far from ideal, making the developed constraints suited for some applications while other approaches are more suited for others.

1.3 Contributions of this thesis

The problem tackled in this thesis is to guarantee that the controlled system has a certain minimal robust performance, independent of the controller design method, for perturbed processes subject to hard constraints on the controller output. The minimal robust performance that is guaranteed by the presented approach is bounded-input, bounded-output stability and asymptotic perfect tracking of asymptotically constant references in the presence of asymptotically constant disturbances.


The approach presented in this thesis can be seen as a signal based approach with explicit robustness constraints. We have chosen an explicit robustness constraint because this offers the largest flexibility: depending on the problem, a nominal or min-max criterion function can be used without having to take additional precautions. Because the (min-max) criterion function does not have to satisfy certain conditions, one with transparent tuning effects can be selected.

The presented approach differs from most other approaches in the following ways. It uses an input-output description of the perturbed system with norm bounded uncertainties, but state-space and polytopic descriptions can be used when available. Contraction of (an upper bound on) the robustness constraints is guaranteed by using the small-gain theorem, instead of the more usual use of a Lyapunov equation. The approach is applicable to any controller design methodology. Non-zero constant reference tracking is guaranteed in the presence of non-zero constant disturbances. Conservatism of the controller is reduced by not requiring a min-max optimization, by using the difference between the true and worst-case behavior of the process and by an approach that tries to "reconstruct" unmeasurable disturbances as well as possible. For stable processes, the robustness constraints are linear with respect to the controller output u. For unstable systems with a known, robustly stabilizing feedback controller C, u is parametrized as u = C(y) + v and the robustness constraints become linear with respect to v. When MBPC is used, the on-line computational load is comparable to conventional nominal MBPC: the number of (linear) robustness constraints grows linearly with the control horizon.

The main disadvantages of the proposed approach compared to other approaches are as follows.

• For non-zero reference tracking, two model descriptions are needed: one relating the input and output (u, y) and one relating the increments of the input and output (Δu, Δy). This is a significant disadvantage for systems that are not LTI.

• Only hard, time-varying level- and rate constraints on the controller output are considered. All other signal constraints are assumed to be soft.

• Though on-line the robustness constraints can be transparently tuned between good nominal or robust performance with one or two tuning parameters only, they require many process dependent entities to be defined off-line.

Ways to circumvent some of these disadvantages are proposed for future research. A very rough outline of the control strategy developed in this thesis is given below.

Feasible, "worst case" controller outputs umin and umax are calculated on-line. These "bounds" are such that any controller output that stays between them will satisfy the minimal robust performance requirements.

In an inner loop one can design a controller that tries to control the (nominal or perturbed) process as well as possible. This controller can be completely arbitrary: it can be a simple PID-controller, a predictive controller, a NLTV fuzzy or neural network controller, an H∞-controller based on a reduced set of most likely perturbations, etc. The minimal robust performance can then be guaranteed by forcing the output of this controller to remain within the bounds umin and umax dictated by the "worst case outer loop controller".


performance guarantee will invariably introduce one or more of the basic problems mentioned in the beginning of this chapter.

Although the process is assumed to be open-loop stable in most of this thesis, special care is taken that the results are easily applicable to unstable systems as long as a stabilizing controller is known. The basic concept behind the developed robustness bounds is very simple. Because we want easy to use and transparent bounds on the controller output u, we constrain the bounds on u at time k to be linear with respect to u(k). When the process is assumed stable, robust stability is guaranteed when u is guaranteed to remain bounded. This results in the following basic constraint

|u(k) − R(r, ξm, k)| ≤ H(k) |u(k − 1)|

or, equivalently,

umin(k) ≤ u(k) ≤ umax(k)
umin(k) = −H(k) |u(k − 1)| + R(r, ξm, k)
umax(k) = H(k) |u(k − 1)| + R(r, ξm, k)

where r denotes the reference trajectory and ξm the measured disturbance. The output of some nominally stabilizing feedforward controller is denoted by R(r, ξm, k) and H(k) denotes a time-varying, finite impulse response filter with non-negative "taps". As long as (1 − H(k))^{−1} is stable, robust stability is guaranteed.

The desired realization of H is defined as follows: when the inner-loop controller results in a well-behaved system, the resulting bounds are such that this controller output is feasible. If not, they restrain it in such a way that the resulting performance becomes acceptable.
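As a rough illustration of how such bounds might be evaluated on-line, the sketch below computes umin(k) and umax(k) and clips an arbitrary inner-loop controller output into them. It is only one possible reading of the constraint above: it assumes that H(k) acts on the magnitudes of past controller outputs as an FIR filter with non-negative taps and that R(r, ξm, k) is simply supplied by some feedforward controller. The taps and numbers are invented, and none of this anticipates the actual realizations of H and R derived in the later chapters.

```python
# Rough sketch of evaluating the bounds u_min(k), u_max(k) on-line and clipping an
# arbitrary inner-loop controller output into them.  The interpretation of H(k) as
# an FIR filter on |u| and all numbers are illustrative assumptions.
import numpy as np

h = np.array([0.5, 0.3, 0.1])            # non-negative "taps" of H (sum < 1 assumed)

def rbc_bounds(R_k, u_past):
    """u_min(k), u_max(k) from |u(k) - R| <= H(k) |u(k-1)| for the given history."""
    radius = h @ np.abs(u_past[-len(h):][::-1])   # H(k) applied to |u(k-1)|, |u(k-2)|, ...
    return R_k - radius, R_k + radius

def constrained_input(u_inner, R_k, u_past):
    """Clip the inner-loop controller output into the robustness bounds."""
    u_min, u_max = rbc_bounds(R_k, u_past)
    return float(np.clip(u_inner, u_min, u_max))

u_hist = np.array([0.4, 0.6, 0.5])       # past controller outputs u(k-3), u(k-2), u(k-1)
print(constrained_input(u_inner=1.2, R_k=0.2, u_past=u_hist))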

In the rest of the thesis methods are developed to obtain suitable realizations of H, to “detect” as much of the disturbances as possible and to extend the robustness guarantee to e.g. guaranteed asymptotic tracking when the references and the disturbances become constant.

The developed theory is applied to MBPC and the final result is much like the general approach to guarantee nominal stability in MBPC given on page 5.

Robust stability with perfect tracking of a perturbed stable process can be guaranteed in a feasible way by using the following set of constraints.

• A constraint on a conservative "mean-level like" feedback controller output that guarantees robust stability when this controller output would actually be applied to the process.

• A constraint which guarantees the following.

The controller output u remains within a finite distance around the output ur of a robust controller, which consists of a feedforward controller output and the robustly constrained mean-level like feedback controller output.

At every sample time k, it is made sure that there exists a feasible realization of u(k + j) for all j ≥ 0, such that u can converge to ur when the disturbances and the reference trajectory are constant in the future.


1.4 Outline

A pictorial overview of the chapters in this thesis is presented in figure 1.5.

[Figure 1.5 shows the chapter structure as a flow diagram: 1 Introduction → 2A Basic concepts → 2B Advanced concepts → 3 Type zero RBC derivation (3.2 System based analysis, 3.3 Basic BIBO RBC constraint, 3.4–3.10 Extensions) → 4 General type zero RBC → 5 General type one RBC (LTI processes) → 6 General RBC for NLTV processes → 7 Conclusions and suggestions.]

Figure 1.5: Overview and structure of the chapters.

The RBC constraints presented in this thesis are derived by using both signal- and system-based reasoning. The needed notation, concepts and definitions related to discrete-time signal and system theory are presented in the first part of chapter 2.

The second part of chapter 2 introduces the basic concepts used in the RBC constraints and presents a number of results that form the basis of the robust stability guarantees provided by the RBC constraints.

In chapter 3 the basic ideas underlying the new robustness constraints are introduced. Starting from an analysis of the system based robustness constraints, the first, basic, BIBO stabilizing RBC constraint is derived. Then more and more complex situations and objectives are introduced. By analyzing the different possible solutions, the best way to solve the resulting problems is determined leading to increasingly sophisticated RBC constraints.


The resulting constraints are called type zero RBC constraints because they guarantee BIBO stability and the convergence to zero of all signals in the system when the external inputs become zero. Chapter 4 first presents the complete type zero RBC constraint for stable, multivariable, perturbed LTI processes. Then these constraints are extended to unstable processes and processes described in state-space form. After this, guidelines are derived on how to implement and tune the constraints, both for the case that an arbitrary controller is used and the case that an MBPC is used to actually control the process.

The type zero RBC constraints derived in chapter 3 and presented in chapter 4 are basically suited for disturbance regulation around a constant setpoint. The reason that most of this thesis is focused on this problem is that the derived principles are needed to tackle the more general problem of tracking asymptotically constant setpoint changes.

In chapter 5 the results of chapter 4 are extended to guarantee perfect tracking of asymptotically constant references in the presence of asymptotically constant disturbances. The resulting constraints are called type one RBC constraints.

The practical applicability of the results is demonstrated by simulation studies at the end of chapter 4 and 5.

Chapter 6 is set up similarly to chapter 4, except that the results presented in chapter 6 are valid for NLTV processes.

At the end of chapter 6, the extension of the resulting type zero RBC constraints to type one RBC constraints is discussed. The exact extension is not presented in detail because of space and time limitations, and because it is very similar to the extension presented in chapter 5.

Overall conclusions and suggestions for further research are given in chapter 7.

Reader's advice

In part I of chapter 2 basic notational conventions, general assumptions and some specific concepts relevant for this thesis are introduced. Even readers with a sufficient background in discrete-time signal, system and robust control theory are advised to read this part. Readers that are less familiar with these topics are also advised to read appendix A, which offers a quick review of the basic concepts in these areas.

To understand the RBC approach one should read part II of chapter 2 and, especially, chapter 3. Chapter 2 and 3 contain quite a lot of details. On a first reading it is therefore recommended to glance through these two chapters, focusing on the main ideas and conclusions while skipping many of the more detailed discussions and equations. To present the underlying ideas clearly, we have chosen not to accumulate the results presented in chapter 3: the best approach to handle every new situation is considered separately.

To understand how all ideas presented in chapter 2 and 3 come together one should follow with chapter 4, 5 and 6.


How the entities in the models should be obtained is also discussed in these chapters, because this is relevant for implementation.

Chapter 5 builds on the results obtained in chapter 4. Although it is mainly implementation oriented, it does explain the steps taken to extend the results from chapter 4 to type one RBC constraints. To implement and use the RBC constraints, it suffices to read chapter 4, 5 and 6. On a first read it is advised to skip chapter 6, because it mainly deals with the NLTV model that is needed and with how the entities in this model can be obtained. It does not present much that is new with regard to the RBC constraints: once the needed NLTV model is obtained, the RBC constraints form a quite straightforward extension of those presented in chapter 4 and 5.

The appendices supply background information on specific topics. An appendix should only be read when one is not familiar with a specific topic, e.g. the IMC approach, or when one wants additional insight into a topic, e.g. the tuning rules presented in chapter 4.

Simulation software


Chapter 2

Preliminaries

In part I of this chapter we will introduce notational conventions, general assumptions and some basic concepts related to discrete-time control theory that are of special interest in this thesis. For those less familiar with discrete-time (robust) control theory, a more thorough review of basic concepts is given in appendix A. Most of the theory presented in part I of this chapter and all the theory presented in appendix A can be found in basic textbooks on these subjects, e.g. [22], [55], [69] and [97].

In part II of this chapter we introduce a number of concepts that can be seen as extensions of some of the basic concepts presented in part I. These extensions form useful tools for analyzing and guaranteeing robustness of non-linear systems. This is illustrated by the presentation of some basic results that will play a key role in the rest of this thesis.

Part I: Basic concepts and notation

2.1 Signals

In this section we introduce some basic notions and definitions from discrete-time signal theory.

Notation: Throughout this thesis

$\mathbb{R}$ will denote the set of real numbers
$\mathbb{R}^n$ will denote the set of real-valued $n \times 1$ vectors
$\mathbb{R}^{n \times m}$ will denote the set of real-valued $n \times m$ matrices
$\mathbb{R}_+$ will denote the set of non-negative real numbers
$\mathbb{Z}$ will denote the set of integers
$\mathbb{Z}_+$ will denote the set of non-negative integers
$\ell$ will denote the set of real-valued time-sequences with the infinite time axis $\mathbb{Z}$
$\ell_+$ will denote the set of real-valued time-sequences with the semi-infinite time axis $\mathbb{Z}_+$
$k \in \mathbb{Z}$ will denote the sampling number: $x(k)$ is the value of the signal $x \in \ell$ at the $k$-th sampling time $t_k$ with respect to some predefined sampling time $t_0$


$(x)_i$ or $x_i$ denotes the $i$-th element of the vector $x$
$(X)_{ij}$ denotes the element of the matrix $X$ on the $i$-th row and $j$-th column
$(X)_{i:}$ denotes the $i$-th row of the matrix $X$
$(X)_{:j}$ denotes the $j$-th column of the matrix $X$
$X^T$ denotes the transpose of the matrix $X$
$\mathbf{1}$ denotes a vector of ones with appropriate dimensions
$I$ denotes the identity matrix with appropriate dimensions
$0$ denotes the zero matrix with appropriate dimensions

For any defined set $X$, the superscripted set $X^n$ or $X^{n \times m}$ denotes a similar extension of this set as the corresponding extensions given for $\mathbb{R}$.

The matrix inequality $A > B$ ($A \geq B$), with $A$ and $B$ both $n \times m$ real matrices, means that all elements of $A - B$ are positive (non-negative).
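As a small illustrative example of this element-wise convention (not an example from the thesis):
$$\begin{bmatrix} 2 & 1 \\ 0 & 3 \end{bmatrix} \geq \begin{bmatrix} 1 & 1 \\ 0 & 2 \end{bmatrix} \quad\text{since}\quad \begin{bmatrix} 2 & 1 \\ 0 & 3 \end{bmatrix} - \begin{bmatrix} 1 & 1 \\ 0 & 2 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \geq 0,$$
i.e. the inequality is read element by element and does not refer to positive (semi-)definiteness.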

Concatenated notations. To avoid the introduction of too many separate symbols, we will often use a "concatenated" notation.

E.g. consider that we have a disturbance signal $\xi$, linear filters $Q$ and $\Omega$ and an invertible linear filter $V$. Assume that by some operation we detect a part of the filtered disturbance $QV^{-1}\xi$. Then, instead of introducing a completely new symbol like e.g. $\xi_d$ to denote this detected part, we will often denote it by e.g. $(QV^{-1}\xi)_{det}$. Similarly, a signal that represents the largest possible value of $QV^{-1}\xi$ for all possible realizations of the filter $Q$ in a given set will be denoted by e.g. $(QV^{-1}\xi)_{max}$, and an upper bound on the absolute value of the signal $Q\xi$ will be denoted by e.g. $\overline{(Q\xi)}$.

Similarly, when we have a filter of which the impulse response forms an upper bound on the absolute value of the impulse response of the filter $Q\Omega V$, we will denote this filter by e.g. $\overline{Q\Omega V}$ instead of by a new general symbol like $X$.

These concatenated notations are used with the expectation that this gives more insight into the nature of the defined symbols than a new general symbol would.

Warning. The downside of using a concatenated notation is that it can be interpreted wrongly. E.g. at first glance one might interpret a signal like $(QV^{-1}\xi)_{max}(k)$ to denote $\max(QV^{-1}\xi(k))$, which is not the case. Symbols like $(QV^{-1}\xi)_{det}$, $(QV^{-1}\xi)_{max}$, $\overline{(Q\xi)}$, etc. should be interpreted as separate signal definitions. The notation is only used to indicate that their definition has some close relation with a specific property of the signals $QV^{-1}\xi$ and $Q\xi$, respectively. Similarly, symbols like $\overline{Q\Omega V}$ should be interpreted as a separate filter definition, which will in some way be related to certain properties of the concatenated filter $Q\Omega V$.
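To illustrate why such absolute-value bounds on impulse responses are useful, the following minimal Python sketch (my own illustration, not code or notation from the thesis; the filter and signal are hypothetical) shows that filtering $|u|$ with the element-wise absolute impulse response gives a point-wise upper bound on the absolute value of the filtered signal.

```python
import numpy as np

# Hypothetical illustration: for an FIR filter with impulse response h,
# the filter with impulse response |h| bounds the filtered signal, because
# |sum_i h(i) u(k-i)| <= sum_i |h(i)| |u(k-i)|   (triangle inequality).
h = np.array([1.0, -0.5, 0.25])            # impulse response of some filter
h_abs = np.abs(h)                          # impulse response of the "absolute" filter

u = np.random.default_rng(1).standard_normal(200)   # arbitrary input signal
y = np.convolve(h, u)                      # filtered signal
y_bound = np.convolve(h_abs, np.abs(u))    # point-wise upper bound on |y|

assert np.all(np.abs(y) <= y_bound + 1e-12)
```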

Definition 1 Norms of signals. The $p$-norm $\|x(k)\|_p$ of an element $x(k) \in \mathbb{R}^n$ is defined as
$$\|x(k)\|_p = \left( \sum_{i=1}^{n} |x_i(k)|^p \right)^{1/p} \quad \text{for } p \in [1, \infty)$$
$$\|x(k)\|_{\infty} = \max_{1 \leq i \leq n} |x_i(k)|$$
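As a quick worked example of these vector norms (my own illustration, not taken from the thesis): for $x(k) = \begin{bmatrix} 3 & -4 \end{bmatrix}^T \in \mathbb{R}^2$,
$$\|x(k)\|_1 = |3| + |{-4}| = 7, \qquad \|x(k)\|_2 = \sqrt{3^2 + (-4)^2} = 5, \qquad \|x(k)\|_\infty = \max(|3|, |{-4}|) = 4 .$$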


Normed signal sets: The set $\ell_p$ is defined as
$$\ell_p := \{x \in \ell \mid \|x\|_p < \infty\}$$

We will use the following sets in particular:

$\ell_\infty$ the set of all time-sequences with finite amplitude
$\ell_2$ the set of all time-sequences with finite energy
$\ell_1$ the set of all time-sequences with finite action
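For instance (an illustrative example, not taken from the thesis), the scalar signal $x \in \ell_+$ with $x(k) = a^k$ for $k \geq 0$ and $|a| < 1$ belongs to all three sets, since $\|x\|_1 = \sum_{k \geq 0} |a|^k = \frac{1}{1-|a|}$, $\|x\|_2 = \frac{1}{\sqrt{1-a^2}}$ and $\|x\|_\infty = 1$ are all finite, whereas the constant signal $x(k) = 1$ for all $k$ has finite amplitude only and thus belongs to $\ell_\infty$ but not to $\ell_1$ or $\ell_2$.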

The set $S_x^n$: The signal set $S_x^n \subset \ell^n$ denotes the set of the signal $x$: $x \in S_x^n$. The superscript $n$ in $S_x^n$ denotes that each instance $x(k)$ of $x$ is an $n \times 1$ vector. When the superscript equals one, it is omitted. For example
$$S_x^n := \{x \in \ell^n \mid \|x(k)\|_\infty \leq 1 \ \ \forall\, k \in \mathbb{Z}\}$$

White signals: A stochastic signal $x \in \ell_+^n$ is white (and of unit intensity) if
$$\lim_{N\to\infty} \frac{1}{N} \sum_{i=0}^{N-1} x(i)\, x^T(k+i) = \begin{cases} I & \text{if } k = 0 \\ 0 & \text{otherwise} \end{cases}$$
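As a numerical illustration of this definition (a hypothetical example of mine, not part of the thesis), the sample autocorrelation of a finite realization of an i.i.d. unit-variance signal approaches the identity matrix at lag $k = 0$ and the zero matrix at other lags as $N$ grows:

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 2, 100_000                          # signal dimension and record length
x = rng.standard_normal((N, n))            # i.i.d. unit-variance samples (white, unit intensity)

def sample_autocorr(x, k):
    """Finite-N estimate of (1/N) * sum_i x(i) x(k+i)^T at lag k."""
    m = x.shape[0] - k
    return x[:m].T @ x[k:k + m] / m        # n x n matrix

print(np.round(sample_autocorr(x, 0), 2))  # approximately the identity matrix I
print(np.round(sample_autocorr(x, 5), 2))  # approximately the zero matrix
```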

2.2 Systems

In this section we will introduce some basic notions and definitions from discrete-time system theory. Further details can be found in appendix A.

2.2.1 General systems and operators

Notation.

• Let a system with input $u$ and output $y$ be represented by the input-output (IO) map $\phi : S_u \to S_y$. Then the output $y$ of the system is denoted by $y = \phi\, u$ and its output at time $k$ is denoted by $y(k) = (\phi\, u)(k)$.

• The input $u$ of a system often consists of different types of signals. When we want to make this explicit, we will write
$$u = \begin{bmatrix} u_a \\ u_b \end{bmatrix}$$
where $u_a \in S_{u_a}$ and $u_b \in S_{u_b}$. E.g. $u_a$ is a set of manipulated signals and $u_b$ is a set of disturbances. To simplify notation we will often use the following notational conventions
