
wim thijs

TR diss 1523

FAULT MANAGEMENT

THESIS

submitted in fulfilment of the requirements for the degree of Doctor at Delft University of Technology,

on the authority of the Rector Magnificus, Prof.dr. J.M. Dirken, to be defended in public before a committee appointed by the Board of Deans

on 22 January 1987 at 16.00 hours

by

WILLEM LODEWIJK THEODOOR THIJS,

born in Breda, Mechanical Engineer


Composition of the doctoral committee:

Rector Magnificus
Prof.dr.ir. H.G. Stassen
Prof.dr.ir. J.J. Kok
Prof.drs. J. Moraal
Prof.ir. J.E. Rijnsdorp
Prof.ir. R.G. Boiten
Prof.ir. O.H. Bosgra
Dr. R.M. Cooke

Dr. R.M. Cooke has, as supervisor, contributed to a high degree to the realization of this thesis. The Board of Deans has appointed him as such.

ISBN 90-370-0001-0


PROPOSITIONS

accompanying the thesis FAULT MANAGEMENT

1

Since the process of evolution and the advancement of human knowledge are often based on faults and coincidences, one should show more respect for the fault.

2

Because all processes are by nature doomed to entropic decay, the fault is a natural incident. Combating the negative consequences of such faults, and putting them to positive use, for example to increase knowledge, is the anti-natural task of the operator.

3

Of the operator's actual fault management task, nobody knows how it should be done. The operator is like the joker: he is deployed where no ready-made solution is available. This thesis.

4

Rationality, here defined as dealing correctly with probabilities, is a necessary condition for the meaningful application of knowledge and for the extension of knowledge. The first priority is hereby settled.

5

Practical experience does not improve the rationality of operators, but merely increases their knowledge. This thesis.

6

The observed frequent irrationality of operators with respect to decisions under uncertainty leads to the inevitable conclusion that deploying these people for coping with faults offers no guarantee whatsoever of success.

7

In coping with faults the operator introduces so many faults that it might be better to concentrate on fault management of fault management than on fault management.

8

The demonstrated imperfections in fault management, and the inadequacy of current technology in coping with faults, must be a weighty factor in deciding on the deployment of high-risk systems.


Management of complex industrial systems that does not provide the operator with a utility function over the possible states of the system evades a fundamental responsibility. This thesis.

11

Human operator models usually comprise nothing but the description of an artificial way in which a defined task can be accomplished. It is a misconception to think that these models teach us much about the humans after whom they are purportedly modelled. This thesis.

12

In the Fine Arts the original is called the model; in the Sciences, the image. Normative scientific models are an exception: they are an artificial original after which the original is modelled.

13

The task of the fault manager can be considerably lightened by a thorough reliability analysis of the system. The Dutch people, for instance, would have been spared the embarrassing example of fault management displayed at the recent record of toppling dominoes if, on reliability grounds, multiple aortas and by-passes had been laid out. Moreover, the world record would then have been about twice as high.

14

The ideas of evolution theory are applicable not only to the origin of species and the origin of thought, but also to thought itself and to the artefacts which are the product of thought. One of the most recent phases of evolution is the displacement of homo sapiens by homo chipiens.

15

Classroom recorder teaching causes especially the musically gifted children to renounce making music for good.

16

He whose goal is controlling is a poor controller.

17

Drawing up propositions on demand is decidedly contractor's work.

All is vanity and a chasing after wind.

[Ecclesiastes, ± 250 B.C.]


PROLOGUE

Writing a Doctor's thesis is in some respects like making a long and demanding journey: it is a fun adventure which is even more satisfying to look back upon than to be engaged in. Like most journeys, the travelling was not done alone; many persons joined a part of the voyage, and all companions contributed to the completion of the trip.

Many of the following pages carry the fingerprints of the students Paul Dorrestijn, Han van Asten, Paul Bolier, Johan Beumer, Joost van Eekhout, Bart van Rixel, Johan Gemen, Peter Lute, Else Riep, and above all Max Mendel; fingerprints also of Roel Prins from the technical education centre where the experiments were held. Finally, the contributions of Wim Veldhuyzen, Ben Jaspers and Jan Hasenack from TNO-IWECO were of considerable value.

This thesis is somewhat unusual in three respects. First, it does not focus on a single aspect for study in depth; rather, it surveys a relatively wide range of subjects. Second, it contains more repetitions than strictly necessary and does not aspire to the usual level of information compression in a doctor's thesis. Third, the style is somewhat free-wheeling, perhaps at times even rambunctious.

The reasons for these divergences are, respectively: (1) The uncultivated character of fault management and the general wording of the research assignment required a broad orientation prior to focussing on a particular area. This thesis is also the final report to TNO-IWECO; therefore the topics covered must also reflect the specific areas of concern for TNO-IWECO. (2) I chose to organize the thesis in such a way that readers who do not work through the complete book are also served: restricted reading of only introductions and conclusions should give a reasonable global impression of the matters dealt with, and most parts and chapters can be read separately. (3) To explain the divergence in style, I must abandon the voyager metaphor and appeal to the composer metaphor.

Writing a doctor's thesis on fault management is in certain respects like composing a piece of music. A musical composition is built around a theme, but it is much more than merely a theme. The harmonic setting, the instrumentation, the transpositions, the inversions, the variations: these are the things which bring the theme to life. I see the parts of this thesis as a short introit, a long praeludium con fantasia in which the theme reveals itself gradually, a triple fugue in which the theme diverges into three melodies (rationality, knowledge and training), and a finale in which the main moments of the piece are presented once again, without much embellishment this time.

I hope the reader can adapt to this somewhat unfamiliar style and structure, and will not be annoyed when a particular theme recurs in different variations. Although this is contrary to the received protocols for thesis prose, in this I have let myself be guided by the muse. In music one does not merely repeat a theme; one develops a theme in order to enhance appreciation of the theme's implications within the field of possibilities in which it is embedded. Now fault management is a field in which the unstructured and quasi-structured possibilities vastly overwhelm the elements of hard disciplined knowledge. A discursive style designed for the transport of rigor is therefore sorely out of place. I have adopted a style designed to sensitize the reader to the nature of fault management, at least as I experience it.


SUMMARY

Contemporary complex industrial systems leave at least one important task largely to the human operator: fault management, that is, coping with machine deficiencies. It is found that the core of fault management is making decisions in the face of uncertainty.

The main problem dealt with in this thesis is the development of a thorough description of the fault management task and the creation of a normative framework which prescribes how the operator should cope with that task. Decision theory is followed and elaborated for application in technical fault management. It is shown that two principally different aspects determine the operator's decisions: rationality and knowledge. Rationality prescribes the way one should use and process knowledge. A normative model is developed as a tool to check and judge the operator's decisions. Knowledge itself is shown to possess four relevant types, called statistical, consequential, utility, and symptom knowledge. Two aspects of knowledge of uncertain items are shown to be essential: calibration and information. A method to measure and judge these knowledge items is proposed.
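The decision-theoretic core of such a framework can be illustrated with a toy calculation (my sketch, not taken from the thesis; all fault names, probabilities and utilities are invented): the operator holds a prior distribution over possible faults (statistical knowledge), updates it with Bayes' rule on observing a symptom (symptom knowledge), and then chooses the act with the highest expected utility (utility and consequential knowledge).

```python
# Toy illustration of a normative fault management decision:
# Bayesian updating of fault probabilities, then an expected-utility
# choice of action. All numbers and names are invented.

priors = {"pump_wear": 0.6, "valve_stuck": 0.3, "sensor_drift": 0.1}

# P(symptom "low pressure" | fault) -- symptom knowledge
likelihood = {"pump_wear": 0.8, "valve_stuck": 0.5, "sensor_drift": 0.1}

# utility of each act given the true fault -- utility knowledge
utility = {
    "reduce_load":  {"pump_wear": 8, "valve_stuck": 2, "sensor_drift": 5},
    "bypass_valve": {"pump_wear": 1, "valve_stuck": 9, "sensor_drift": 4},
}

def posterior(priors, likelihood):
    """Bayes' rule: P(fault | symptom) is proportional to
    P(symptom | fault) * P(fault)."""
    joint = {f: priors[f] * likelihood[f] for f in priors}
    z = sum(joint.values())
    return {f: p / z for f, p in joint.items()}

post = posterior(priors, likelihood)

def expected_utility(act):
    return sum(post[f] * utility[act][f] for f in post)

best = max(utility, key=expected_utility)
print(post)   # pump_wear dominates after the symptom is observed
print(best)   # -> reduce_load
```

The point of the sketch is only the shape of the calculation: rationality governs the update and the maximization, while the four knowledge types supply the numbers.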

A series of experiments is reported in which the actual operator behavior on relevant points of the normative model is tested. Operators appeared to act irrationally very frequently, and experienced operators acted just as irrationally as their inexperienced colleagues. The knowledge of the group of experienced operators, however, appeared better than the novices' group knowledge, on both calibration and information. Only very few individual operators scored well on calibration and information; these are called experts.
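One standard way to quantify the calibration aspect mentioned here (a generic sketch, not the thesis's own measurement method) is to bucket an assessor's probability statements and compare each bucket's stated probability with the observed relative frequency of the events; a well-calibrated assessor's frequencies track the stated probabilities.

```python
from collections import defaultdict

def calibration_table(assessments):
    """assessments: list of (stated_probability, event_occurred) pairs.
    Groups statements by stated probability and returns, per group,
    the observed relative frequency of occurrence."""
    buckets = defaultdict(list)
    for p, occurred in assessments:
        buckets[p].append(1 if occurred else 0)
    return {p: sum(v) / len(v) for p, v in sorted(buckets.items())}

# invented toy data: four statements at 0.8, two at 0.5
data = [(0.8, True), (0.8, True), (0.8, True), (0.8, False),
        (0.5, True), (0.5, False)]
print(calibration_table(data))   # {0.5: 0.5, 0.8: 0.75}
```

Here the assessor is perfectly calibrated at 0.5 and slightly overconfident at 0.8; with more data per bucket such tables become meaningful summaries of an operator's subjective knowledge.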

Corollaries for the areas of training, task allocation and interfaces are drawn. Well-established training methods are evaluated. It is shown that the required fidelity of simulators is inversely proportional to the level of abstraction of the items to be trained. Furthermore, training appears often not to teach what it claims to teach, and training is sometimes counterproductive. Evidence is given that statistical and utility knowledge in particular are largely neglected in actual training and operator work surroundings. Current interfaces are shown to suggest a level of objectivity which is not realistic. It is indicated which aspects of the operator's fault management task can be allocated to automata, and which consequences this will have for the remaining operator work.


SAMENVATTING

Coping with faults in the system, fault management, is one of the main tasks of the operator of contemporary complex industrial systems. It is shown that making decisions in uncertain situations is the core of fault management.

The main problem dealt with in this thesis is the development of an adequate description of the fault management task and the creation of a normative framework which prescribes how the operator should ideally acquit himself of that task. Decision theory is followed and adapted for application in technical fault management. Two different matters turn out to determine the decisions: rationality and knowledge. Rationality prescribes how one must use and process knowledge; the model developed is a tool with which the operator's decisions can be judged. Knowledge proves relevant in four forms: statistical, consequential, utility and symptom knowledge. It is shown that two aspects of knowledge about uncertain matters are essential: calibration and information. A series of experiments is discussed in which the actual behavior of the operator is tested on relevant points of the normative model. It appears that operators very often sin against rationality, and that experienced operators do not decide better than inexperienced ones. The knowledge of the experienced group, however, is indeed better, both in calibration and in information. Only a few individual operators score well on calibration and information; they are called experts.

Corollaries are drawn for training, task allocation and interfaces. Some well-known training methods are evaluated. It is shown that the desired degree of resemblance of the simulator to reality is inversely proportional to the abstraction level of the material to be trained. Furthermore, trainings regularly turn out not to deliver what is claimed; sometimes the training even works out negatively. Statistical and utility knowledge in particular often prove to be neglected in trainings and daily work surroundings. Current interfaces appear to suggest a level of objectivity that is not realistic. It is indicated which aspects of the operator's fault management task can be assigned to automata; furthermore, the consequences this will have for the operator's remaining task are pointed out.

CONTENTS

Prologue
Summary
Samenvatting

PART A - INTRODUCTION

I INTRODUCTION
  1 Industry's Achilles heel
  2 History and context
  3 Initial goals
  4 Provisional problem

PART B - PRELIMINARIES

II THE SCENERY
  1 Fault management
  2 Cognition, internal representation
    1 General
    2 Frames, Minsky, 1975
    3 Scripts, plans, goals, Schank and Abelson, 1977
    4 Schemata, Norman and Bobrow, 1976
    5 Conclusion
  3 Notes on RISO concepts
    1 Rasmussen's behavior classification
    2 Diagnostic strategies
    3 Taxonomy of mental representation
    4 Aggregation
    5 Flow representation
    6 Conclusion
  4 Production systems
    1 General
    2 Newell and Simon, 1972
    3 Hunt, 1981
    4 Conclusion
  5 Fault-Symptom matrices
    1 General
    2 Bond and Rigney, 1966
    3 Duncan c.s.
    4 Conclusion
  6 Low and moderate-fidelity training
    1 TASK, a low-fidelity training
    2 FAULT, a moderate-fidelity training
    3 Conclusion
  7 Promissory perspectives, tracks to follow

III TENTATIVE APPLICATIONS
  1 Previous study
  2 Data gathering and data analysis
    1 Data and analysis
    2 Sub-projects
    3 Extension of data gathering
    4 Conclusion
  3 Training
    1 General
    2 Training and the SRK-model
    3 Implementing low and moderate fidelity training
    4 Conclusion
  4 Fault-symptom matrices
    1 Perspectives
    2 IWECO-simulator-FSM
    3 Conclusion
  5 Theories and models
    1 Empirical cycle of science
    2 Theory and model continuum
    3 Ordering keys of theories/models
    4 Glancing at 'successful' models
    5 Model man or machine?
    6 Summary and conclusions

IV INTERMEZZO - PARADOXES PARADISE
  1 Fault management's paranoia
  2 The snag in 'successful' models
  3 Global and qualitative models
  4 Conclusion

V THE PROBLEM
  1 Summary, necessities
  2 The problem

PART C - A DECISION THEORETICAL APPROACH

VI NORMATIVE MODEL
  1 Fault management is decision management
    1 Fault management
    2 Normative decision theory
    3 Towards a normative fault management decision theory
  2 Definitions, notations and postulates
    1 Definitions and notations
    2 Expectability and desirability
  3 Optimal acts
    1 Optimal acts
    2 An example
    3 Some remarks, alternative criteria
  4 Observations
    1 Principle of observations
    2 Likelihood ratio
    3 Optimal observations
    4 Graphical representation
    5 Observation heuristics
  5 Rationality and knowledge
    1 Rationality and knowledge
    2 Types of knowledge
    3 Aspects of rationality
  6 Subjective knowledge; calibration and information
    1 Calibration and measurement methods
    2 Methodological difficulties
    3 Information and measurement methods
    4 Expert resolution
  7 Model for goal directed acts
  8 Summary, discussion and conclusion
    1 Summary
    2 Discussion
    3 Conclusion

VII EXPERIMENTS
  1 Goal
  2 Experimental design, subjects and hypotheses
  3 Description of the experiments
    1 Experiments related to probability
      Relevance
      Experiments 1 and 2, partition
      Experiments 4 and 10, overlapping categories
      Experiments 5 and 11, unreliable observation
    2 Experiments related to utility
      Relevance
      Experiments 3 and 9, irrelevance of identical consequences
      Experiment 8, transitivity and stability of choice
    3 Experiments related to subjective knowledge
      Relevance
      Experiments 6 and 7, calibration and information
  4 Overview and discussion of the results
    1 Failure rates of operators
    2 Technical experiments answered better?
    3 Experience examined
    4 Relation between calibration and information
    5 Acceptability of the normative model
  5 Conclusions
    1 Drawbacks on the conclusions

PART D - COROLLARIES

VIII TRAINING
  1 What to train
  2 Evaluation of some trainings
    1 High fidelity training
    2 Moderate and low fidelity training
    3 Fault-Symptom matrices and lectures
  3 Recommendations and conclusions

IX DISCUSSION AND CONCLUSIONS
  1 Summary and main conclusions
  2 Glancing back at initial goals
  3 Future perspectives
  4 Critique, final remarks

Epilogue
Index
References
Appendices
  I Questions of the experiments
  II Results of the experiments
    1 Results on nominal measurement level
    2 Results on ordinal measurement level
  III Evaluation of the training TASK
  IV Supervisory control analysis form


PART A, CHAPTER I

INTRODUCTION

1.1 Industry's Achilles heel

Coping with equipment failure is increasingly becoming a main operator task. Due to ongoing automation, most of the conventional operator tasks are nowadays allocated to machines; the operator only supervises the nearly autonomous system. It can be said that today's operator tasks are restricted to that fuzzy area for which as yet no machines have been built; operator tasks are defined by the shortcomings of the designers and not by the talents of human operators.

Particularly important fields which are still left to the human operator are those areas which demand flexibility, intelligence and improvisation. One of these is coping with system deficiencies: fault management.

The above situation often results in rather peculiar operator jobs: the operator has to pick up the bits and pieces left. Bibby et al. [1975] characterized the operator job as 99% boredom and 1% terror. The operator is normally busy with quiet optimization of normal operation: process tuning, fuel saving, some maintenance work. Or he is only stand-by, reading a book or so. These so-called stationary supervision tasks do not require a very thorough knowledge of the system; they are mainly routine procedures which imply low responsibilities. However, 1% of the task is completely the opposite: if the system fails, the operator has an extremely difficult and responsible position. If he makes a bad judgement, the whole plant might be lost, or even worse. Today's operator job is therefore a rather schizophrenic one. Its two main sides are counterparts that do not match. Even worse: characters well suited to one side of the job will probably not be suited to the other side! Thus, the operator and his job might be the 'Achilles heel' of the design and operation of contemporary complex industrial systems.

Many publications draw attention to the situation sketched in the previous paragraph. The usual conclusion is a call for either (1) better and more operator training, or (2) better and more systems and displays to assist the operator with his job [Rasmussen and Lind, 1981].

Both these 'solutions', however, presuppose explicit knowledge of the essentials of the fault management task and its specific difficulties. Unfortunately, only bits and pieces of this knowledge are available. The work reported in this thesis aims, in the end, to contribute to this knowledge.

1.2 History and context

Some attention should be devoted to the history and the context of the project on fault management reported here.

After a period of concentration on manual control problems, the Man-Machine-Systems Group of the Laboratory for Measurement and Control of the Delft University of Technology turned its attention towards supervisory control. At first stationary supervision was focal: the supervision of systems under normal conditions. Results in this area include the control and decision model of the human operator of slowly responding complex systems by Kok and Van Wijk [1978]; further results are discussed in [Van Lunteren, 1983, ch. 3, 4, 5]. The operator's supervisory behavior in situations of equipment failure, however, has only been worked on from 1980 onwards.

The desire to get involved in the fault management area led to an invitation to William B. Rouse to be a visiting professor during the period Sept. 1979 - Aug. 1980. Years of experience, in particular with fault management training, are embodied in Rouse and his group, presently at the Georgia Institute of Technology, Atlanta, U.S.A. Rouse's primary responsibilities in Delft involved starting new projects on fault management. The project reported in this thesis was started after a survey conducted under Rouse's direction (paragraph III.1). Late in 1979, connections with the Netherlands Organization for Applied Scientific Research (TNO), in particular its Institute for Mechanical Engineering (IWECO), were renewed. In 1976 Veldhuyzen completed his manual control study on helmsman's ship-steering performance [Veldhuyzen, 1976]. This study used the ship-navigation simulator of IWECO. IWECO also built, together with EXXON, a supertanker steam propulsion plant simulator. This simulator is used as a training tool in refresher courses for experienced ship engineers. Figure 1.1 shows some trainees working at the main panel of the simulator. Coping with equipment failure is an important feature in these courses. More details concerning this simulator and the training are given, for example, by Jaspers and Hanley [1979] and Jaspers [1982]. In this thesis, the IWECO-EXXON simulator is mostly called the TNO-IWECO simulator. It is obvious that TNO-IWECO has an interest in acquiring more knowledge on human supervisory behavior; more knowledge enables them to improve their simulator trainings. So a new cooperation started: the fault management project which is reported in this study.

Figure 1.1. Main panel of the supertanker steam propulsion plant simulator built by IWECO-EXXON and run by IWECO.

The scope and context of this study are herewith indicated. The scope encompasses the area of fault management in complex industrial systems and the specific con­ text is the world of a ship's engineer.


1.3 Initial goals

As described above, the availability of a high-fidelity simulator of a complex ship propulsion plant, together with the experienced ship engineers who visit the simulator as trainees, and the intent to study human fault management behavior resulted in the research project reported here. The initial goals of the project were:

1. To obtain a better insight into human supervisory behavior, especially in fault management tasks.

2. To gain a method to develop and to evaluate fault management training programs.

3. To find objective criteria to assess the effectiveness of simulator fault management training, and to assess the transfer of simulator skills into practice.

4. To recommend operator friendly task allocations between man and machine, and to design operator friendly interfaces; both again seen from the fault management point of view.

Obviously, the first-mentioned aim is the hinge around which the other aims revolve. Gaining a better insight requires knowledge of: (1) the essentials of fault management tasks, (2) the actual fault management behavior, (3) the desired 'ideal' behavior, and (4) many psychological issues on human knowledge and information processing which underlie the behavior.

1.4 Provisional problem

The first step towards a possible satisfaction of the project's goals was, of course, an investigation of the relevant literature; an expedition through the scenery of fault management. Which topics should be covered before answers to the questions posed might be given? How are these topics interrelated?


Central items are the actual fault management behavior and the desired ideal behavior. First, the actual behavior, hereafter sometimes called the Actual Action Sequence: what measurement methods are available to extract data from the actual operator behavior during training? Which data are crucial? Second, how to construct the desired behavior, the Required Action Sequence? This requires knowledge of human abilities and limitations, and knowledge of machine demands. In short, which tasks does the machine require from the operator, and in what way can the operator best cope with them?

Before formulating any answers to these questions, it seems wise to concentrate on knowledge about man's mental processes (thinking, mental representation, cognition) on the one hand, and on the area of task specification and task analysis on the other. If the actual and the required action sequences can be established, comparison of the two will indicate something about the competence of the operator, which in turn is an important input to the design, execution and evaluation of training. The areas of preliminary investigation therefore include: cognition and mental representation, task analysis, task specification, measurement methods and training. All these, of course, should be looked at from the fault management point of view.
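One simple way such a comparison of Required and Actual Action Sequences could be operationalized (my illustration only; the text does not commit to a particular metric) is an edit distance: the minimum number of inserted, omitted or substituted actions separating the two sequences. The action names below are invented.

```python
def edit_distance(required, actual):
    """Minimum number of insertions, deletions and substitutions
    turning the required action sequence into the actual one
    (the classic Levenshtein dynamic program over sequences)."""
    m, n = len(required), len(actual)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i            # delete all remaining required actions
    for j in range(n + 1):
        d[0][j] = j            # insert all remaining actual actions
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if required[i - 1] == actual[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

required = ["acknowledge_alarm", "close_valve", "diagnose", "log"]
actual   = ["acknowledge_alarm", "diagnose", "close_valve", "log"]
print(edit_distance(required, actual))  # 2: two actions out of order
```

A small distance would then suggest competent performance; the metric says nothing, of course, about which deviations actually matter, which is precisely why a normative framework is needed.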

Part B, the next part of this study, contains four chapters which report the findings of our preliminary investigations. Chapter II gives a view into the scenery of fault management. The important items of fault management are discussed briefly, and some relevant literature in the areas of cognition/mental representation, training and modelling is elucidated, resulting in the recommendation of some tracks to continue. Chapter III reports on the experiences along these tracks and on the areas of task analysis/specification and measurement methods; experiences leading towards recognition of some inherent pitfalls in fault management research (chapter IV).

The preliminary part will show the existence of a crucial vacuum in the fault management discipline: the lack of a generally accepted, fundamental approach towards the desired, ideal behavior in fault management situations. There exists no normative framework as to how the operator is supposed to act. The diverse preliminary tracks converge towards the statement of a more exact and limited problem in chapter V: the creation of a normative skeleton of fault management.

Part C reports on this normative framework (chapter VI) and, furthermore, on a series of experiments in which the actual behavior on crucial points of the normative theory is examined (chapter VII).

Part D gives corollaries based on the previous findings, especially with respect to training (chapter VIII), the initial goals of the project and future perspectives (chapter IX). Finally, a summary, the main conclusions and some final remarks are given.


P A R T B

PRELIMINARIES

This part contains four chapters on the elaborate work which led to marking out the crucial problem of fault management as we see it; it is therefore called 'preliminaries'. The main treatise of this thesis (Part C, chapters VI and VII) can basically be understood without the items elucidated in this preliminary part, but the items are of interest for at least two reasons: (1) to establish the place and relevance of the main work, and (2) because some results on side-tracks are worth reporting.

The first chapter of this part, chapter II, called The Scenery, gives a brief anthology and examination of relevant literature. Its goal is to ground our final problem and to outline fault management work by others, which our findings will place in a different perspective. It has no pretension of being a complete literature review; see for that, for example, Rouse's paper 'Models of Human Problem Solving: Detection, Diagnosis and Compensation for System Failures' [1983]. First, fault management is analyzed in some detail. The second part of the chapter gives an introduction to the way some significant researchers in the cognitive field attack the problem of knowledge and its use. Attention is further given to the work of Rasmussen and his man-machine research group. In the fourth place, the modelling principle of production systems is treated, followed by a presentation of the Fault-Symptom matrix approach. The chapter continues with a treatise on the work of Rouse's group in the fault management training scene. A summary of items from the literature which look promising for our project concludes the chapter.

The third chapter, Tentative Applications, gives a brief report on our experiences while trying to follow some recommendations from the literature. It concerns attempts to find methods to obtain data from a subject's performance, attempts to apply special fault management training methods in our context, and the use of Fault-Symptom matrices. If we had to set the results of these tentative applications to music, it would surely be in a minor key and the tones would be dissonant; however, the dissonance would thrust in a direction in which a solution may be found. Some general remarks on science, theories and models, including an analysis of so-called successful models in the area of stationary supervision, form the transition towards the next chapter.

The fourth chapter demonstrates that fault management possesses some peculiar characteristics which make this area particularly tricky; much of the work is balancing on the edge of paradox! This thesis will appear to be contaminated as well.


CHAPTER II

THE SCENERY

II.1 Fault management

In fault management or coping with system failures, four phases are commonly distinguished [Thijs, 1983a, 1983b]:

1. Detection, 2. Compensation, 3. Diagnosis, 4. Correction.

First, the operator has to become aware of the potential presence of a fault: DETECTION. This may happen because the operator notices that the system is not acting in conformity with his expectation, or because an alarm points at an undesired state of the system. In both cases the operator normally detects only the results of a failure, not the failure itself; he is confronted with the symptoms, but the cause, the disease itself, is still obscure. After detecting that something is wrong and which undesired results are to be expected, the operator will try to COMPENSATE for these unwanted results: he will take defensive measures against the existing and/or expected unwanted results, remedying symptoms. The third phase is inference of the cause, establishing the disease departing from the symptoms and using any other knowledge: DIAGNOSIS. Once the cause, the failure, has been found, it can be removed by replacing or repairing the faulty element: CORRECTION.

The correction aspect is usually the domain of maintenance people; it does not belong to the fault management task of the control room operator. Therefore, correction will not be given any further attention in this report.
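The phase structure above can be sketched as a small data model (my paraphrase, not from the thesis): detection comes first, correction, when present, comes last, and compensation and diagnosis may occur in either order in between.

```python
from enum import IntEnum

class Phase(IntEnum):
    """The four fault management phases distinguished in the text."""
    DETECTION = 1
    COMPENSATION = 2
    DIAGNOSIS = 3
    CORRECTION = 4

def plausible(sequence):
    """Check an observed phase sequence against the structure described:
    detection must open the sequence; correction, if present at all,
    must close it; compensation and diagnosis may interleave freely
    (diagnosis may even precede compensation)."""
    if not sequence or sequence[0] is not Phase.DETECTION:
        return False
    if Phase.CORRECTION in sequence and sequence[-1] is not Phase.CORRECTION:
        return False
    return True

# the most common sequence described in the text
common = [Phase.DETECTION, Phase.COMPENSATION, Phase.DIAGNOSIS, Phase.CORRECTION]
print(plausible(common))  # True
```

The model deliberately allows compensation and diagnosis to mingle, mirroring the observation that operators usually start diagnosing while still busy stabilizing the system.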

The other phases of fault management are often not too clearly separated in reality; in particular the compensation and diagnosis phases may mingle. Mostly, an operator already starts diagnosing while still being very busy with compensative measures in order to stabilize the system. Sometimes diagnosis precedes compensation. Sometimes the operator does not manage to pursue all stages, being caught in too big an entanglement. Nevertheless, the most common sequence is as described: (some) compensatory actions are taken before diagnosis establishes the fault, whether this violates the layman's expectation or not.

This outline of today's fault management task in complex industrial systems must also confess that the operator task is mostly not clearly defined. The task is often given in vague terms like: "Try to keep the system in the optimal working point despite possible system failures". The way to perform the task is mostly even more obscure: no strategy or theory to sustain the operator's decisions is available. Note that pre-specified procedures like those used in aviation do not belong to our scope of interest; we are focusing on inherently unforeseen, uncertain situations (a computer is far better at following pre-set procedures anyway). The operator has to cope, without explicit and adequate tools, with many uncertainties; he has to deal with a system whose behavior, and possibly even structure, has been changed by the unknown failure in an unknown way!

II.2 Cognition, internal representation

II.2.1 General

In the last two decades psychology shifted its major attention from the behavioristic stimulus-response (S-R) approach towards the underlying cognitive processes which are responsible for that behavior. Fundamentally, cognitive psychology, sometimes called information processing psychology, is the study of knowledge and how people use it. (Of course cognitive processes are still often studied by means of S-R type experiments as a research vehicle.) This general shift is so dramatic that some authors, for example Umbers [1981], label the change a paradigm switch in the sense of Kuhn's scientific revolutions [Kuhn, 1962].

The classical sequential stage view on human cognition, see figure II.1, i.e. perception, sensory buffer, short term memory and long term memory, is no longer accepted by the majority [Glass et al., 1979]. Perception and sensory buffers seemed to blur together, as did short and long term memory. Information processing often appeared to be parallel, not strictly serial. Long term memory turned out not to be a passive store of information, but to be able to generate answers to particular questions autonomously.

Figure II.1. Sequential stage model of perception: the sensory systems and the memory systems. From: Lindsey and Norman, 1977, page 304.

A remark, addressed to readers trained in engineering rather than a social science, may not be out of place. The figures borrowed from the literature are not to be taken as something like block or flow diagrams. Conventions in psychology and engineering do differ!

The study of cognition is rooted at the border between psychology and artificial intelligence. One small step further towards computer science, and the computer metaphors of human thinking emerge. But the isomorphism between the actual cognitive processes and the computer-analogy models of human information processing, i.e. human cognition depicted by a set of blocks frequently used to describe computer configurations, appears not to be very high either (see for example figure II.2). As mentioned above: the brain has possibilities to process information in parallel. Sequential, time-sharing-like aspects are, however, certainly also recognizable. Computer metaphor models differ at least at the following points from human information processing. The brain is considered not to possess a separate CPU-like working area and a passive memory. The storage of the brain is probably not organized and labeled by place, but by content. The contents, the semantics of the stored information itself, somehow provide a key in the retrieval process, the so-called Content Addressable Memory [Shiffrin and Atkinson, 1969]. Different domains of memorized items, for example visual and verbal, do have their own well-known specific areas in the brain. But this concerns the macro-organization. At the microlevel, contents are presently thought to be prevailing. In short, cognition focuses on the representation and processing of human knowledge. In the following sections some theories/models in this area will be elucidated. One central aspect is always the tradeoff between storage and computation. Literal representations require very little computational effort but vast amounts of storage, because every single situation requires a separate representation. On the other hand, very general (often abstract) representations require lots of computation in order to become applicable to any specific situation. There seems to be a consensus among psychologists that: (1) cognitive processing capacity is very limited and (2) mental representation is extremely extensive.

Let us glance at some examples of existing models in this field.

Figure II.2. Information processing model: an information processing system (IPS) with a processor, receptors and effectors coupled to the environment. From: Newell and Simon, 1972, page 20.

II.2.2 Frames [Minski, 1975]

Perhaps the most often quoted psychological concept in our branch of the man-machine systems literature is the 'frame' idea of Marvin Minski [1975]. Although his original work was about the abstract symbolic representation designed to cope with movement in and reasoning about three-dimensional space, it is of far wider application. To summarize the essence of the theory we can best quote Minski himself (page 212): "When one encounters a new situation (or makes a substantial change in one's view of the present problem) one selects from memory a substantial structure called a frame. This remembered framework has to be adapted to fit reality by changing details as necessary. A frame is a data-structure for representing a stereotyped situation, like being in a certain kind of living room, or going to a child's birthday party. Attached to each frame are several kinds of information. Some of this information is about how to use the frame. Some is about what one can expect to happen next. Some is about what to do if these expectations are or are not confirmed."

Frames contain only the essence of situations. Additional information from the environment will fill in the details if the specific situation requires a detailed frame. Collections of frames are linked together into frame systems. The frame systems are linked, in their turn, by an information retrieval network. If a proposed frame cannot be made to fit reality, this network provides a replacement frame. As long as nothing extraordinary appears, the recognition of a frame automatically triggers the appropriate action. A phenomenon like surprise is the result of a feature appearing in a situation when the frame does not include it or its possibility of occurrence. In our opinion, one aspect of humor may very well be analyzed in terms of frames: a frame is purposely conjured up, but the situation appears all of a sudden to refer to a completely different and unexpected frame, which makes people laugh.

Many other researchers launched concepts similar to Minski's frames, some of them even earlier. The next sections will mention a couple of them.

II.2.3 Scripts, plans, goals [Schank and Abelson, 1977]

The script idea is much like Minski's frame concept, but scripts concentrate on actions and processes rather than situations. Scripts are standardized, generalized episodes (personal experiences, repeated similar sequences). Plans are made up of general information about how actors achieve goals. Plans describe the sets of choices that a person has when he sets out to accomplish a goal. Plans link series of scripts to goals. Goals trigger the use of familiar plans, which in turn cause the execution of well practiced scripts.

Whether memory is primarily organized in a semantic or episodic way is controversial [Loftus and Loftus, 1976]. (Semantic memory, for abstract semantic categories, words, is organized in a hierarchical way using class membership as a basic link, for example in trees like: canary-bird-animal etc.)

II.2.4 Schemata [Norman and Bobrow, 1976]

Another example of the direction in which the thoughts of many scientists drifted can be found in Norman and Bobrow [1976]. They also propose an active memory in which an important part of the central processing is done unconsciously, see figure II.3.

Figure II.3. Memory schemata view of the human information processing system (input: physical signals). From: Norman and Bobrow, 1976, page 118.

Experience has created a vast repertoire of structural schemata that can be used to characterize knowledge acquired by any experience. Incoming data and higher-order conceptual structures all operate together to activate the memory schemata.


The schemata contain at least (1) a global description to which the incoming data should fit, a sort of template, (2) rules for searching, with other schemata, for more detailed descriptions, sub-templates, and (3) rules for passing information and/or instructions through to the central area, called 'communication and decision making'. The schemata work autonomously and simultaneously to do both: analyze the incoming data and make suggestions to one another about possible interpretations. The schemata either add their results to the common data pool, making them available to other schemata again, or they communicate directly by referring to each other through context-dependent descriptions. Thus the system is both data driven and conceptually driven. Short term memory consists of those schemata that are undergoing active processing. There is not exclusively a set of sequential stages (the classical information processing point of view).

Contents of consciousness and conscious information treatment are, however, not clearly described by Norman and Bobrow. Presumably they consider the 'communication and decision making' area to be a kind of conscious working area. Whether this area coincides with what they call short term memory is unclear. So much for Norman and Bobrow's model.

Similar concepts are for example the 'cognitive specialists' of Hayes-Roth and Hayes-Roth [1979]. Their cognitive specialists are in their turn comparable to Selfridge's 'demons' [1959], which are specialized in making sub-decisions (for example deciding how many horizontal line-fragments are recognizable in a written character in order to determine the written letter). Selfridge's model of mind contained a legion of these demons: the pandemonium. The cognitive specialists are something like the schemata; they record their decisions in a common data structure, here called the blackboard. By doing so, these decisions become available to other specialists, etc. The blackboard is divided into several planes containing conceptually different categories of decisions.

II.2.5 Conclusion

This survey through the suburbs of cognition delivered only this kind of general, vague, albeit sometimes elegant, thinking models. Many detailed specific models can be found, but our efforts to find an overall framework of cognition to use as a base yielded only the kind of models mentioned above. These models are too vague to be of direct use in our fault management problems.

II.3 Notes on RISO concepts

Rasmussen and his co-workers shaped their own experience, together with some general ideas borrowed from psychology, into concepts which appeal more directly to man-machine workers. Several papers, which partly overlap each other, are of special interest to our work. It must be noted that the Rasmussen discussed here is not the Rasmussen of the disputed 'Rasmussen Report' on the risk of Nuclear Power Plants.

II.3.1 Rasmussen's behavior classification, the SRK-model

Rasmussen distinguishes three categories of human behavior in controlling or supervising tasks: Skill-based, Rule-based and Knowledge-based behavior, which will be called the SRK-model [Rasmussen, 1980, 1983]. These categories are depicted in figure II.4.

The figure shows a scheme of the major ways in which sensory inputs may result in actions. Again engineers be warned: although the figure resembles a control-theoretic block diagram, it is no such thing; it is a pictorial representation of a still relatively fuzzy psychological view on human behavior.

Skill-based behavior

The lowest level, skill-based behavior, is the area of automated or nearly automated actions like walking, bicycle riding, breathing, simple assembly-line actions, and so on. They require little or no conscious attention and effort. For an experienced operator, using tools and reading gauges will fall into this category.

Rule-based behavior

Figure II.4. Rasmussen's three level behavior classification: knowledge-based behaviour (identification), rule-based behaviour (recognition, association, stored rules for tasks) and skill-based behaviour (feature formation, automated sensori-motor patterns), all acting on sensory input. From: Rasmussen, 1983.

Whereas skill-based behavior is typical for extremely frequent tasks, rule-based behavior is typical for less frequent tasks in a familiar work environment, like more complex assembly-line actions and emergency procedures in an aircraft. Rule-based behavior concerns pre-specified, but not necessarily formalized, actions. The rules underlying the behavior can be derived empirically from trial and error, they can be formed by causal reasoning, or they can be prescribed as formal work instructions. Situations or states are, provided that they are recognized, directly mapped onto, or associated with, specific actions. Both situation and action will be conscious, but the way in which a situation is recognized and how it is mapped onto an action, the association rule, often are not.

Knowledge-based behavior

The third, and highest, level of behavior in terms of mental effort on the operator's part is the knowledge-based level. Quoting Rasmussen [1980]: "This is the level of intelligent problem solving which should be the prominent reason for the presence of human operators in an automatic plant. Behavior in this domain is activated in response to unfamiliar demands from the system. The structure of the activity is an evaluation of the situation and planning of a proper sequence of actions to pursue the goal, and it depends upon fundamental knowledge on the processes, functions and anatomical structure of the system". Knowledge-based behavior involves higher-level thinking, typically using fundamental principles and knowledge to deduce and/or to infer which actions should be taken. In this behavior area there are normally no pre-specified guidelines. All the stages depicted in figure II.4 at the knowledge-based level have to be given conscious attention.

All three behavior levels may be used in all phases of fault management. The emphasis, however, is as follows: detection is mostly done at the skill-based level, compensation at the rule-based level and diagnosis at the knowledge-based level.

Some aspects of this three level behavior classification, aspects which are closely related to training and verbalization, are elucidated in some detail in sections III.2 and III.3.2, respectively.

II.3.2 Diagnostic strategies

Two main ways can be distinguished in the operator's diagnostic strategies: symptomatic and topographic search [Rasmussen, 1981]. Symptomatic search is a pattern recognition approach. Familiar patterns of symptoms refer directly to a situation/failure. Mostly these familiar situations are directly associated with some action: rule-based behavior. Topographic search, on the other hand, requires conscious analysis using topographic features of the internal structure of the system while establishing the situation/state/failure: knowledge-based behavior.

Here Rasmussen actually spells out what kind of rule- and knowledge-based behavior is distinguishable in the act of diagnosing system failures.

II.3.3 Taxonomy of mental representation

Rasmussen followed psychology's general shift of interest towards cognition. He examined what kind of mental representations of reality may be available in the operator's mind [Rasmussen, 1979]. He describes an appealing hierarchical set of representations of technical systems. He herewith translates, in fact, the psychologist's concept of simultaneous mental representation at different levels of abstraction into a technical environment. By far the most attention is, of course, devoted to mental representation at the knowledge-based level.

II.3.4 Aggregation

Different tricks by which operators cope with complexity are described by Rasmussen and Lind [1981]. A major trick is called aggregation. It concerns adaptation of the resolution of the mental representation to the nature of a problem, i.e. changing the mental representation in order to make the problem fit; the solution of a difficult problem often appears trivial, provided the proper representation is found! Aggregation is often compared with zooming out on an area, or, reversely, zooming in, the so-called decomposition. These phenomena are recognizable at all three behavior levels.

II.3.5 Flow representation

Finally, the work on operator assisting displays (section I.1) should also be mentioned [Lind, 1981; Lind, 1982]. At a high level of abstraction in Rasmussen's taxonomy, mass and energy flow representations are mentioned. Riso workers created a computer generated display in which this type of representation is visually presented. The general idea is that the mass flow and energy flow at a high abstraction level are the most important items of which the operator should be and should stay aware.

II.3.6 Conclusion

In general, the work of Rasmussen looks conceptually rich; the models seem to be widely applicable. This is, in fact, also what is aimed for. Rasmussen [1983] himself pleads explicitly for focusing more on the creation of what he calls qualitative models: ". . . which can guide overall design of the system structure including, for example, a set of display formats, while selective, quantitative models can be used to optimize detailed designs." The models can be used as a guide in thinking about the operator's behavior. In fact they are similar to the general psychological models mentioned in section II.2 before, but Rasmussen gives the schemata, frames, etc. a technical content. Rasmussen's models are, however, mostly not substantiated by hard data. Rasmussen manages to make his theories very plausible, but they do possess quite a few speculative elements. Familiarization with Rasmussen's models certainly induces the feeling of a better understanding of the human operator, but did not help us much in attempts to solve practical problems; the models are far removed from the pragmatism engineers like.

II.4 Production systems

II.4.1 General

A principle often used to model human information processing is the so-called production system, consisting of rules of the format IF&lt;situation&gt; THEN&lt;action&gt; (figure II.5). The situation side is a list of things to watch for, and the action side is a list of things to do. A vast number of artificial intelligence programs is based on the principle of production systems. This principle is a pragmatic engineer's approach, but it will turn out to have its own specific drawbacks. We will mention a few examples from the literature where this approach was used.
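To make the principle concrete, a minimal production-system interpreter might look as follows. This is an illustrative sketch only; the rule contents are invented and not taken from any of the cited programs.

```python
# Minimal sketch of a production system: a working memory of facts and a
# list of IF<situation> THEN<action> rules. Each cycle, the first rule whose
# situation is contained in the working memory fires; its action is recorded
# and added to memory so the same rule does not fire twice (refraction).
# The rules below are invented examples.

def run(rules, memory, max_cycles=10):
    trace = []
    for _ in range(max_cycles):
        for situation, action in rules:
            if situation <= memory and action not in memory:
                trace.append(action)
                memory |= {action}
                break
        else:
            break          # no rule could fire: stop
    return trace

rules = [
    ({"alarm", "pressure high"}, "open relief valve"),
    ({"alarm"}, "acknowledge alarm"),
]
print(run(rules, {"alarm", "pressure high"}))
# -> ['open relief valve', 'acknowledge alarm']
```

The conflict-resolution strategy here (first matching rule wins, ordered by specificity of the rule list) is only one of many possibilities; the models discussed below differ precisely in how such choices are made.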

Figure II.5. The production system principle: production rules IF&lt;situation&gt; THEN&lt;action&gt;, long term memory (LTM), short term memory (STM) of symbols, and the environment. From: Newell, 1973.


II.4.2 Newell and Simon, 1972

The most widely known example of the application of the production system principle is surely the 'General Problem Solver' of Newell and Simon [1972]. Although Newell and Simon managed to model in detail relatively complex tasks such as chess and the often quoted cryptarithmetic puzzle 'Donald + Gerald = Robert', they hardly ever manage to escape from these specific tasks. A useful general point (re-)stressed by them is the distinction between the algorithmic and the heuristic way of thinking. Algorithmic thinking is systematic thinking comprising logic propositions, deduction and such: a sequence of rules which, when followed precisely, automatically results in a correct answer. Heuristic thinking is the 'feeling', or rules of thumb, which help to adopt a suitable strategy of algorithms; it belongs to the domain of creative thinking and is mostly not explicit.
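The puzzle itself can serve to illustrate the algorithmic extreme. The sketch below (ours, not Newell and Simon's model) blindly enumerates digit assignments until the addition holds; a single rule of thumb derived from the units column is added to prune the search.

```python
from itertools import permutations

# Exhaustive (algorithmic) solution of DONALD + GERALD = ROBERT: try every
# assignment of digits to the ten letters D,O,N,A,L,G,E,R,B,T until the sum
# checks out. One deduction from the units column (D + D must end in T) is
# used to discard most candidates cheaply.
def solve():
    for d, o, n, a, l, g, e, r, b, t in permutations(range(10)):
        if d == 0 or g == 0 or r == 0:      # no leading zeros
            continue
        if (d + d) % 10 != t:               # units-column rule of thumb
            continue
        donald = 100000*d + 10000*o + 1000*n + 100*a + 10*l + d
        gerald = 100000*g + 10000*e + 1000*r + 100*a + 10*l + d
        robert = 100000*r + 10000*o + 1000*b + 100*e + 10*r + t
        if donald + gerald == robert:
            return dict(zip("DONALGERBT", (d, o, n, a, l, g, e, r, b, t)))
    return None
```

A human solver works almost entirely with such column deductions, i.e. heuristically, whereas the enumeration itself is the purely algorithmic part.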

Newell and Simon analyze human behavior during the performance of specific tasks in fine-grained and complex detail, but even they themselves admit that their findings perhaps tell more about the task being performed than about the human performing it.

II.4.3 Hunt, 1981

One production system model shall be given special attention here, because it focusses explicitly on fault management, in particular on the diagnosis aspect. It was developed by Hunt in his Ph.D. thesis [1981], and published later in Kyoto and in IEEE [Rouse and Hunt, 1981; Hunt and Rouse, 1982]. Hunt describes the human operator's diagnostic behavior by two sets of production rules: a set of symptomatic rules and a set of topographic rules (see section II.3.2). The selection of the rule to apply is made dependent on four attributes: recallability, applicability, usefulness and simplicity. The relative weight of each attribute in its class is measured with the help of fuzzy set theory. The rule chosen is, in this model, the one with the largest minimum value among the four attributes; the maximum membership value in the fuzzy intersection of the four sets decides the rule to be applied.
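The selection criterion can be sketched as follows. Each candidate rule carries a fuzzy membership value for each of the four attributes; the fuzzy intersection takes the minimum per rule, and the rule with the largest such minimum is applied. The rule names and membership values below are invented for illustration.

```python
# Sketch of Hunt's rule selection by fuzzy sets: every candidate rule has a
# membership value for each of the four attributes (recallability,
# applicability, usefulness, simplicity). The fuzzy intersection takes the
# minimum per rule; the rule with the largest such minimum is selected.
# Names and membership values are invented for illustration.

def select_rule(rules):
    # rules: {rule name: (recallability, applicability, usefulness, simplicity)}
    return max(rules, key=lambda name: min(rules[name]))

rules = {
    "S-rule: replace suspect component":  (0.9, 0.4, 0.8, 0.9),  # min = 0.4
    "T-rule: trace the system structure": (0.6, 0.7, 0.7, 0.5),  # min = 0.5
}
print(select_rule(rules))  # the T-rule wins, since 0.5 > 0.4
```

Note how the maximin criterion penalizes a rule that scores very high on three attributes but poorly on one, which is exactly the effect of taking the fuzzy intersection.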


Figure II.6. Hunt's diagnostic model structure: consider state information; if any familiar pattern is found, apply the appropriate S-rule; if not, consider structural information and apply the appropriate T-rule. From: Hunt, 1981, page 51.

Hunt applied this model structure only to a simple and completely explicit task, the computer training method called FAULT, which will be discussed in section II.6. Within this task, the model selected in 50% of the cases the same actions/observations as the subjects did (out of 23 rules); the choice of action/observation of model and subjects was in 70% of the examined decisions either equal or similar. Examination of the model structure depicted above shows that only the association state → action is modelled; the pattern recognition itself is omitted!

Rouse [1983] tried to apply Hunt's basic model structure, which, by the way, equals the rule-based part of Rasmussen's SRK-model, to different areas of fault management, i.e. (1) Execution and Monitoring, (2) Planning and (3) Recognition and Classification. Rouse claims to give herewith an overall problem solving model, a synthesis of the state of the art so far. In particular, the possibility of the model to make errors and to recover from them is advertised as a strong quality.

In essence this model is, however, only a part of the SRK-model of Rasmussen, namely the rule-based branch presented in an alternative way. Furthermore, it is speculative. Apart from Hunt's experiments it has, to our knowledge, not been evaluated, and even in a special and simple task these experiments showed it to give a relatively poor description of the human operator's behavior.


To mention some more examples out of the vast amount of literature: Goldstein and Grimson [1977] included elements of skill acquisition in relation to attitude instrument flying in their production system approach. Wesson [1977] made a model of the traffic controller using a rank ordered, situation dependent set of production rules. Doering and Knaeuper [1983] described pilot behavior in terms of the situation-action characteristic using a normative approach. Further examples: Shortliffe [1976], Duda [1976], Young [1979], Reiter [1980], and many others.

II.4.4 Conclusion

All investigated models have in common that they are very task specific. Only if the task is analyzed in detail can a model by means of production systems be constructed. Owing to his adaptability and flexibility, the human will adapt to the specific task, and the proposed models become highly task dependent. Therefore, the technique of production systems does not seem to fit the fault management problem as it has been set; in the area of decisions under uncertainty the task is not precisely known.

II.5 Fault-Symptom matrices

II.5.1 General

The relation between faults, i.e. machine deficiencies, and symptoms, i.e. observable results of the deficiencies, may be represented in a so-called Fault-Symptom matrix (figure II.7). A symptom consists of a set of elements, called the generating symptoms. A generating symptom is for example a high voltage or a normal pressure reading; a fault may be a broken pump engine or a loose electrical connection, etc. The rows in this matrix indicate the values of the different generating symptoms connected to a certain failure; a complete row is called a symptom. The columns indicate the values of a certain symptom aspect in connection with different failures. In the example of figure II.7 the symptom aspects may take five values, which is arbitrary. There will be generating symptoms which can take only two values, like a control lamp which will be either on or off. Examples of symptom aspects are: voltage gauge, pressure gauge, alarm lamp, etc. Although this manner of representation seems to have the potential to be very helpful in symptomatic diagnosis (section II.3.2), and for example in the estimation of an optimal search strategy, it is not often mentioned in the literature. Two applications found in the literature will be mentioned here.

Figure II.7. Fault-Symptom matrix: rows correspond to faults no. 1-4, columns to symptom aspects no. 1-4; the cells contain the generating symptoms, with values ++ = very high, + = high, 0 = normal, - = low, -- = very low.
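Such a matrix can be represented directly as a small data structure, and the diagnostic rule of observing the most discriminating symptom aspect first then amounts to picking the column with the most distinct values. The matrix below is an invented example in the spirit of figure II.7, not its actual contents.

```python
# A Fault-Symptom matrix: rows are faults, columns are symptom aspects,
# cell values come from {"++", "+", "0", "-", "--"} (very high ... very low).
# Invented example in the spirit of figure II.7: aspect 0 separates all
# four faults, the other aspects separate few or none.

matrix = {
    "fault 1": ["++", "+", "0", "-"],
    "fault 2": ["+",  "+", "0", "-"],
    "fault 3": ["0",  "0", "0", "-"],
    "fault 4": ["--", "+", "0", "-"],
}

def most_discriminating_aspect(matrix):
    """Index of the symptom aspect whose column holds the most distinct
    values, i.e. the observation that best separates the candidate faults."""
    n_aspects = len(next(iter(matrix.values())))
    return max(range(n_aspects),
               key=lambda i: len({row[i] for row in matrix.values()}))

print(most_discriminating_aspect(matrix))  # aspect 0 discriminates best
```

Counting distinct values is the crudest possible measure of discriminability; with known failure probabilities an information-theoretic measure would be the natural refinement.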

II.5.2 Bond and Rigney, 1966

A fine example of usage of the Fault-Symptom principle is found in Bond and Rigney [1966]. They use a certain variant and call it a Symptom-Malfunction matrix. Each subject examined by them had to fill the cells {i,j} of the matrix with his personal, subjective, conditional probabilities P(Si|Hj) for a certain electronic circuit. Here Si stands for symptom aspect i, and Hj stands for fault j. The symptom aspects were three-valued: high-normal-low. The failure probability of the different components was supposed to be uniformly distributed. Of special interest was whether the subjects acted in a Bayesian way in their sequence of tests and their component replacements. The result was (1) that only 50% of the replacements were predictable in a Bayesian way, (2) that the initial subjective matrices of the subjects differed significantly, (3) that persons with a better initial matrix (= closer to the "objective" reality) performed more in a Bayesian way, and, last but not least, to quote Bond and Rigney: "Elimination of a few 'popular' erroneous errors from each P(Si|Hj)-matrix would have made the proportion of Bayesian solutions very high indeed". The authors recommend this matrix as a training criterion.
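The Bayesian benchmark used in such studies can be sketched: given a prior over the faults Hj and the subjective conditionals P(s|Hj) for an observed symptom value s, Bayes' rule yields the posterior fault probabilities. The numbers below are invented for illustration, not Bond and Rigney's data.

```python
# Bayesian benchmark over a Symptom-Malfunction matrix: a prior over faults
# H_j and the subjective conditionals P(s | H_j) for an observed symptom
# value s give posteriors P(H_j | s) by Bayes' rule. Numbers are invented.

def bayes_update(prior, likelihood):
    # prior: {fault: P(H)}; likelihood: {fault: P(observed symptom | H)}
    joint = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(joint.values())
    return {h: p / total for h, p in joint.items()}

prior = {"H1": 0.5, "H2": 0.5}          # uniform failure probabilities
likelihood = {"H1": 0.9, "H2": 0.3}     # e.g. P(voltage reads high | H)
posterior = bayes_update(prior, likelihood)
print(posterior)  # H1 becomes three times as probable as H2
```

Repeating the update for each successive test, with the posterior serving as the next prior, gives the normative sequence against which the subjects' replacements were compared.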

II.5.3 Duncan c.s.

The symptomatic aspect of diagnosis with the Fault-Symptom matrix as a base has been given attention by the group of Duncan and Shepherd, formerly at Hull University, England. They always looked upon it from a training point of view. A series of publications was based on experiments around a static projection, by means of slides, of a particular conventional panel of a chemical plant. The different slides showed the symptoms, gauge readings, of different system failures. Recognition of symptom patterns improved considerably with training. General diagnostic rules were established by means of the Fault-Symptom matrix: first observe those symptom aspects which discriminate strongly between possible failures; in the simple example of figure II.7 this is of course symptom aspect number one. Training of these general diagnostic rules appeared to be very helpful, also in cases of failures unknown to the subjects, i.e. not previously trained [Shepherd et al., 1977; Duncan et al., 1975]. This conclusion is, of course, only valid within the area of investigation: the specific, not very complex chemical plant component which was only statically exposed and judged. The researchers, however, wrongly use a much more general tone in their findings; they seem to extrapolate inadmissibly far. No method for recording the Fault-Symptom matrix is mentioned in any publication examined.

II.5.4 Conclusion

Apparently, the Fault-Symptom matrix approach provides a clear and helpful means of analyzing the characteristics of the system, and it can be fruitfully used in training. Note that the Fault-Symptom approach as used by Bond and Rigney can be categorized as knowledge-based behavior, but that Duncan's work tends towards the rule-based side (section II.3.1). The plain recognition of symptom patterns is even a skill-based item. The diagnostic strategies used are mainly symptomatic. A further investigation seems justifiable, but before doing so, the efforts of Duncan c.s. lead us to training specifically, which will be given attention in the next section.


II.6 Low- and moderate-fidelity training

As explained in the section concerning the SRK-model of Rasmussen (II.3.1), the essential mental actions belonging to fault management tasks are supposed to be knowledge-based. In order to train these essentials, it might be far better to use a non-high-fidelity training aid (specific training!). Much of the effort of the Man-Machine Systems group of Rouse was devoted to developing and working with such non-high-fidelity training programs.

II.6.1 TASK, a low-fidelity training

Rouse c.s. created some computer training methods which are supposed to teach operators some basic diagnostic methods. First of all, the essence of topographic search (section II.3.2) was transformed into a sort of computer game [Rouse, 1978].

Figure II.8. An example of the low-fidelity training method called TASK; the faulty element is no. 31. From: Rouse, 1978.

The idea is that a system always consists of a network of elements. If an element fails, all elements which are downstream of it will show a distorted output, see figure II.8. The stream direction is indicated by arrows. The output of the terminal elements of the network is displayed: right (1) or wrong (0). The troubleshooter may ask the system whether the stream between any couple of connected elements is right or wrong (see the information in the upper left corner of figure II.8). The art of trouble-shooting is to find the faulty element as quickly as possible. Only one element fails at a time, and the network structure as well as the failing element are randomly generated for each task. This low-fidelity training method was called TASK. The item trained is claimed to be topographic diagnosis.
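The principle can be sketched in a few lines: one faulty element corrupts its own output and that of everything downstream of it, and each query rules candidates in or out. The network below is an invented example, and the troubleshooter is simplified to querying element outputs one by one; Rouse's actual program displays terminal outputs and lets the subject query arcs.

```python
# Sketch of the TASK principle: in a directed network one faulty element
# corrupts the output of itself and of everything downstream of it.
# Invented example network; queries are simplified to element outputs.

def upstream(net, node):
    """All elements strictly upstream of `node` (net maps node -> predecessors)."""
    result, stack = set(), list(net.get(node, ()))
    while stack:
        p = stack.pop()
        if p not in result:
            result.add(p)
            stack.extend(net.get(p, ()))
    return result

def output_wrong(net, fault, node):
    """True if `node` is the fault or lies downstream of it."""
    return node == fault or fault in upstream(net, node)

def diagnose(net, fault):
    """Narrow the candidate set by querying every element's output."""
    candidates = set(net) | {p for ps in net.values() for p in ps}
    for node in sorted(candidates):
        if output_wrong(net, fault, node):
            candidates &= {node} | upstream(net, node)   # fault at or above node
        else:
            candidates -= {node} | upstream(net, node)   # fault elsewhere
    return candidates

net = {2: [1], 3: [1], 4: [2, 3], 5: [4]}   # predecessors of each element
print(diagnose(net, fault=3))
```

The art the training aims at is, of course, not querying exhaustively as above, but choosing the query order that halves the candidate set as fast as possible.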

II.6.2 FAULT, a moderate-fidelity training

Note that there are no feedback loops in the TASK training. A later version including such complicating feedback loops [Rouse, 1979] exists, but it was not often used since it was quickly overtaken by a moderate-fidelity training game called FAULT [Rouse, 1981]. In the FAULT training, the system structure is not randomly generated but resembles a real system, for example a car engine or a supertanker propulsion system, see figure II.9. The elements represent functional elements of the real system. The failing element is again randomly generated. The bad output is given in the shape of a verbal symptom, for example: "Car engine cranks but does not start". Major tools for finding the element which is at fault are, apart from the verbal symptom: some gauge readings, the possibility to test whether the outcome of a specific element is normal or abnormal, and the option of bench-testing elements.

Figure II.9. An example functional block diagram for the moderate-fidelity training method called FAULT. The depicted system is a Car Engine.

The failure probability distribution over the different components can be non-uniform. There are some more details peripheral to the program, like different costs for different tests and repairs etc., but these are not essential for the scope of this chapter. Obviously, the FAULT task allows some symptomatic elements in the search sequence. It enables the trouble-shooter to apply his general knowledge about the system in addition to the depicted system structure. The emphasis is, however, still on the topographical strategy of diagnosis. More recently another CRT-based program was developed, this time with some system dynamics built in. It is called PLANT, and it concerns a network of tanks of which the levels have to be kept acceptable in spite of introduced failures [Rouse and Morris, 1981; Morris, 1983].

Rouse and co-workers used TASK, FAULT and PLANT in many studies. As expected, a positive transfer of TASK to FAULT was established, i.e. people trained with TASK were better at the FAULT task than people only trained with FAULT [Hunt, 1981]. Positive transfer of both TASK and FAULT to real-life trouble-shooting in an aircraft maintenance domain was also shown [Johnson, 1980]. Several descriptive models of aspects of operator behavior were developed using the programs. These concerned items like prediction of solution time and selection of tests [Rouse, 1978a; Rouse and Hunt, 1981; Rouse and Hunt, 1982].

II.6.3 Conclusion

Although Rouse c.s. do claim that their non-high fidelity training methods are also successfully applied in complex systems like a large ship steam propulsion plant (Marine Safety International, New York) [Rouse, 1983], there are no sufficiently detailed publications in this confidential area to make a judgement possible.

The available publications concern only relatively simple, straightforward systems; the methods apparently suit at least that domain very well. Training with these methods will bring the operator's level of thinking down from the knowledge-based level to the rule-based level for the relevant items. Secondly, the ratio between the topographic and symptomatic (section II.3.2) sides of diagnosis will be shifted towards the symptomatic side by these trainings. What needs conscious attention and deliberation in the beginning can be done quickly and without much attention after a while. It looks worthwhile to try and implement this kind of training in the TNO-IWECO situation.

II.7 Promising perspectives, tracks to follow

In this chapter, an outline of the field of fault management was given, and some relevant literature was discussed.

The outcome of these investigations in the field of psychological theories/models, including Rasmussen's behavior classification and taxonomy of mental representation, is, so far, the following. No theories/models were found that are directly applicable to our general fault management problem. In particular the psychological theories are far from ready for application to real-world problems. Rasmussen's derivations from these theories stand closer to our questions; they certainly give the impression of providing a better insight into the operator's behavior, but they too are far from handy recipes with which answers to our questions might be generated.

The approach of production systems appeared successful in the creation of models describing operator behavior in specific tasks, and it is the main building block of most efforts in artificial intelligence (not elucidated in the foregoing). However, the production system point of view does not directly support insight into the deliberations of the operator. Whereas the psychological theories/models were judged to be too vague, too general and not pragmatic enough, production systems are too specific and too pragmatic.

Low- and moderate-fidelity training, however, seems to have the potential to be of significant value in fault management training. Apart from this, being a relatively simple and completely defined version of a problem relevant for fault management, it proved to be a valuable vehicle for experiments. The foregoing two sentences apply equally to the Fault-Symptom matrix approach.

Promising perspectives therefore exist in the direction of Fault-Symptom matrix exercises and the application of the low- and moderate-fidelity training methods; the intention is to proceed along these tracks. But apart from the intention to
