
Functional Safety of Programmable Electronic Systems (PES)

Workshop

April 7, 1987

Delft Progress Report
Delft University of Technology

DELFT PROGRESS REPORT (1986-1987) 11

Workshop Functional Safety of Programmable Electronic Systems
Delft, The Netherlands, April 7, 1987

Organised by the Working Group on Electrical Engineering and Safety, Delft University of Technology

Contents

Preface 160

Introduction 161-163

B.K. Daniels / Functional Safety in Hardware/Software Systems 165-182
John Eva / System Safety, the Goal of Software Design 183-195
M.J. Langhout / Reliability, a Consequence of Software Design 197-209
R.A. Roe / Human Reliability and Interface Design 211-227
J. Ward / Software in the Design of Reliable Medical Instrumentation 229-240
R.H. Bourgonjon / Reliability Aspects of Large Software Systems 241-254
A.E. Weinert / Hardware Implemented Fault-Tolerance and Safety for Programmable Automation Systems 255-265

Preface

Programmable Electronic Systems (PES) are permeating all corners of our society. The functional safety of these systems is becoming essential for the functioning and the safety of this society. What kind of measures can be taken to ensure that the systems available have the highest degree of safety and reliability? Research into the kinds of deficiencies that endanger these systems is an obvious method. International contacts to facilitate the exchange of different views is another. But more can be done: there can be an exchange of views between different disciplines. For the number of disciplines becoming more and more dependent on PES is growing. These international and interdisciplinary contacts will result in a growing understanding and growing standardisation of terminology and technology.

This workshop can be seen as one of these contacts. But it also tries to arouse the interest of the academic world in this emerging and promising field, one which might even turn into a separate discipline some day.

Thanks are due to the Programming Committee (dr.ir. L.R.J. Goossens and dr.ir. R.P. van Wijk van Brievingh) and to the editors of this special issue (ir. F. Koornneef and prof.dr. A.R. Hale) for their sterling work in setting up the workshop and producing these proceedings.

Prof.ir. J.L. de Kroes

(Chairman of the Working Group on Electrical Engineering and Safety)


Introduction

The Working Group on Electrical Engineering & Safety consists of staff members of the participating laboratories, with an executive board composed of the following members:

Chairman: prof.ir. J.L. de Kroes
Secretary: ir. F. Koornneef
Financial manager: J. Zwijnenberg

Founding laboratories:
Safety Science Group
Telecommunications and Traffic Control Laboratory
Laboratory for Control Engineering
High-Voltage Technology Laboratory
Microwave Laboratory

Participating laboratory: Laboratory for Biomedical Engineering

In 1980, the formation of the Safety Science Group within the Faculty of General Sciences made it possible to develop and apply system safety techniques to research projects in the area of electrical engineering. The Working Group on Electrical Engineering & Safety was founded in 1981 in order to stimulate research and education regarding the safety aspects of the design, development and use of electrical or information (sub-)systems.

To date, the Working Group has generated a number of master's level projects for students, mainly in the area of traffic systems safety (road, rail and vessel traffic systems) as well as health care (hospital) systems. These projects are aimed either at system modelling and the derivation of safety assurance requirements or at systematic hazard analysis using an extensive tool kit such as MORT: Management Oversight and Risk Tree. The Working Group, adhering to its requirement that at least two of the participating laboratories must be involved, has stimulated several research projects and supported existing and post-graduate courses where safety is of relevance.

RESEARCH

Research is currently being conducted in the areas of systems modelling and the development of analytical tools for system safety assurance.

I. FUNCTIONAL HARD- AND SOFTWARE SAFETY

The major Working Group project is aimed at the problems arising when information subsystems are applied as parts of larger systems. The growing vital role of such subsystems introduces new types of potentially major hazards and new dimensions of quality assurance in relation to data integrity. The overall system performance can be compromised by insufficient software and hardware reliability. In addition, more attention must be paid to man-machine interfaces in the light of the growing role of people as decision-makers in complex work processes and the dynamics of 'human error'. This project is also connected with the IEC Advisory Committee on Safety (ACOS), Working Group on Programmable Electronic Systems:

ACOS/WG-PES.

II. EXPERT SYSTEM ON "HUMAN ERROR" CONDITIONS IN CLINICAL PRACTICE

A pilot project in progress focusses upon the dynamics of human error in clinical processes and the functional hazards which are related to the application of data-processing technological subsystems. The functional specifications of these systems are frequently less than adequate with respect to the working processes in which the clinical operators are engaged.

III. ROBOT SAFETY

Robotics projects within the Department of Electrical Engineering focus upon the design and development of sensors and programmable control systems. The systematic review of robot control systems with respect to intrinsic as well as functional safety is a new activity of the Working Group.

IV. COMPUTERIZED SYSTEM SAFETY ANALYSIS TOOLS

The complexity of organisational systems requires advanced analytical tools for assessment and review of safety assurance programs, for instance when an accident has occurred. The US Department of Energy has initiated the development of the Management Oversight and Risk Tree, which is a powerful analytical tree for safety program review and analysis of accidents. A major feature is that (potential) accident processes resulting in damage can be systematically connected to weak management functions.

No commercially available version of ComputerMORT is yet suitable for the development and validation of less complex but still consistent trees, although there is a need for this type of program. The Working Group project is aimed at the development of an innovative computer implementation of the MORT tree, including the required validation of simplified trees.
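The logic behind a MORT-style analysis can be illustrated with a toy evaluation of an AND/OR tree. The event names below are invented for illustration and are not taken from the actual MORT chart:

```python
# Toy evaluation of a MORT-style logic tree: leaves are basic events
# judged adequate (False = no problem) or less than adequate (True),
# and AND/OR gates propagate the judgements up to the top event.

def gate(kind, children):
    """Return an internal node: kind is 'AND' or 'OR'."""
    return {"kind": kind, "children": children}

def leaf(name, less_than_adequate):
    """Return a basic event with its adequacy judgement."""
    return {"kind": "LEAF", "name": name, "lta": less_than_adequate}

def evaluate(node):
    """True if the (sub)tree's event is reachable from the leaf judgements."""
    if node["kind"] == "LEAF":
        return node["lta"]
    results = [evaluate(c) for c in node["children"]]
    return all(results) if node["kind"] == "AND" else any(results)

# Hypothetical fragment: damage occurs if a hazard is present AND
# either the physical barrier or the management oversight was
# less than adequate.
tree = gate("AND", [
    leaf("hazard present", True),
    gate("OR", [
        leaf("physical barrier failed", False),
        leaf("management oversight LTA", True),
    ]),
])

print(evaluate(tree))  # True: the top event (damage) is reachable
```

Validating a simplified tree then amounts to checking that its gates are consistent, i.e. that no combination of adequate leaf judgements can still produce the top event.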

V. SAFETY ASSURANCE IN VESSEL TRAFFIC (SUB) SYSTEMS

Small scale innovation projects on route guidance systems for vessels are currently aimed at the development of a low cost anti-collision warning system for small vessels. A prototype is expected to operate in 1987.

VI. SAFETY STANDARDIZATION

The improvement of safety-consistency in standardization is a growing point of concern and has led to active participation in the revision process of the IEC Publication 513 on "Basic aspects of the safety philosophy of electrical equipment used in medical practice".

Functional Safety in Hardware/Software Systems

B.K. Daniels

National Computing Centre Ltd., Manchester, UK

Delft Progr. Rep., 11 (1986-1987) pp. 165-182. Received: May 1987

The use of programmable electronic systems in industry has grown considerably with the availability of microcomputers. These systems offer many benefits to the designer and user in providing more comprehensive control of industrial processes, environments, machine tools and robot installations.

However, the rapid evolution of these programmable systems and their use in applications by companies having no specialist expertise in safe computing may lead to unnecessary levels of hazard and death or injury to plant operators or the public. A number of examples of actual and mythological incidents are discussed later. Fortunately there are only a small number of documented incidents where death can be directly attributed to the programmable system.

As confidence grows in the safe performance and application of programmable systems, the users and suppliers are considering incorporating many more safety functions within the functional requirements of programmable systems.

There is a need for guidance on how to specify, design, test, use and assess programmable systems having safety functions. This guidance is available from a number of sources and its nature is described later. The guidance needs to be accessed by the many engineering disciplines who play a role in providing and maintaining the overall safety of systems, since the programmable system is only one (often small) part and its performance is both influenced by and reacts with the larger system to which it is interfaced.

In deciding what is 'safe enough' for a particular application of a programmable system, the appropriate industry, national and international safety criteria will need to be identified and applied. There is a wide range of criteria from which to choose and there are no harmonised European criteria, even though Community countries must conform to the Community Directives on Major Hazards (Seveso Directive) and Strict Liability.

Whilst it is a difficult area in which to work, there is no fundamental reason why programmable systems cannot be as safe as the systems they are replacing. This should not be seen as complacency, and for each application of a programmable system the potential for hazard needs to be identified and appropriate steps taken to reduce the probability and magnitude of each hazard. The evidence is that a well defined and controlled approach to developing programmable systems can lead to acceptable safety and also towards improved plant performance and the economic advantages associated with improved plant availability.

INCIDENTS ASSOCIATED WITH PROGRAMMABLE SYSTEM FAILURES

This paper concentrates on the functional safety aspects of programmable systems. Fortunately there are only a few documented incidents where the death of a person has been directly attributed to a failure in computer hardware or software. It is necessary to analyse these incidents to understand what happened, why it happened, and to devise new ways of preventing that particular incident recurring and to obtain the maximum generic benefit for use in other system developments.

An incident occurred in the USA with a computerised radiation therapy machine (1). The computer controlled the sequencing of the radiation machine to expose the source and to shut off the source, and by timing and directional control to vary the total radiation dose received by a patient. A patient received a radiation dose some 80 times greater than was planned for a particular treatment session. The patient had previously been recovering through radiation treatment from a skin cancer. The high dose received resulted in his death. The Federal and State regulators attributed the failure to the machine's software, and the manufacturer acknowledges that its equipment may have been partly to blame. At the time of writing insufficient detail of the incident is available to be more precise as to the sequence of events leading to the death or to the detailed design of the therapy machine and the procedures used to develop and validate the software.

An incident occurred in Japan at a manufacturing plant using robots. A worker was crushed to death by a robot. Again, there is imprecise information on what happened and what role, if any, the computer system controlling the robot played in the incident. It is clear that the safety of production workers was compromised by:

a) Allowing people to gain access to a hazardous area.
b) Failing to detect the presence of a person.
c) Not providing an automatic safety action (such as stopping robot movement), perhaps by interlocks on fences/guards, when there is a hazard to personnel.

There are obvious safety implications in military systems. In normal circumstances, i.e. peacetime, missiles and other weapon systems must not be released accidentally, and if they are accidentally released must be capable of being directed away from large population centres, destructed in flight, and/or must not explode on reaching the target. There are many safety systems which aim to provide these functions, and it appears they do perform this function on most occasions.
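The robot-cell safeguards listed earlier (access control, presence detection, and an automatic safe action) can be sketched as a simple interlock. This is a minimal illustration, not a real robot controller; all names are invented:

```python
# Minimal sketch of a robot-cell interlock: the robot may only move
# when the cell gate is closed and no person is detected inside.
# If either condition fails, the safe action is to stop movement.

def robot_may_move(gate_closed, person_detected):
    """Permissive condition: both safeguards must be satisfied."""
    return gate_closed and not person_detected

def safe_action(gate_closed, person_detected):
    """Return the command the interlock should issue to the robot."""
    if robot_may_move(gate_closed, person_detected):
        return "RUN"
    return "STOP"  # fail-safe default: any doubt stops robot movement

print(safe_action(gate_closed=True, person_detected=False))   # RUN
print(safe_action(gate_closed=False, person_detected=False))  # STOP: gate open
print(safe_action(gate_closed=True, person_detected=True))    # STOP: person inside
```

The design choice worth noting is that "STOP" is the default branch: the interlock must fail towards safety, so movement is only permitted when every condition is positively satisfied.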


However, in exercises off the Danish coast a missile was accidentally released which headed inland and destroyed and damaged a number of holiday homes. It was fortunate that it was not the peak holiday period, and no one was killed or injured. Although the result of this incident is known, the cause is not, perhaps due to military security.

In other missile systems being constructed in the USA, it was discovered in analysis that it was possible to launch the missiles accidentally if strict procedures were not followed precisely. There are many mythological incidents which are quoted for military aircraft. The fighter with a computerised navigational system which went into inverted flight every time it crossed the equator (due to sign change in trigonometric algorithms?), the satellite launch vehicle whose second stage accelerated towards earth (another sign wrong).

During the Falklands War the UK naval vessel Sheffield was hit and sunk by an Exocet missile; some 20 people died and many were injured. It has been variously reported that:

a) The system identifying missiles as being from the enemy was inoperative at the time (operator error or interaction between systems).
b) That the Exocet missile had not been entered into the system as an enemy missile.
c) That Sheffield's radio-telex system operated on the same frequencies as Exocet and was in use at the time of the incident.

The computer systems designed to defend Sheffield and to attack enemy missiles did not function as required and may have led to a false sense of security.

Because actual, well reported, computer-originated safety incidents are few and far between, it is necessary to examine incidents which, in slightly changed circumstances, could have led to injury, fatality or damage to the environment. There are more incidents in this class, but again not many, and so incidents which have led to financial loss are also included as a valuable source of experience.

A near miss incident occurred in France on 20 June 1984 at La Croux on the Tarn river (2)(3). Electricité de France have a small hydro-electric power station at this location, and the water level behind the dam is controlled by a number of gates. Some of these gates are controlled by computer. Due to a number of factors, a large volume of water was released from the dam, equivalent to the highest expected flood levels experienced on this river. It was fortunate that the incident occurred in June, rather than July or August, since a number of camp sites were flooded and were unoccupied at the time. The incident occurred as a result of sensor failure, updating of the software and hardware of the programmable systems controlling the gates, and an operational problem due to gate vibration which had been solved by controlling pairs of gates from the computer rather than independent gate controls. Whilst EdF had well developed techniques for ensuring safety on their nuclear power plant applications of computers, they had less well defined quality control procedures for their other plant, in particular the hydro-electric schemes. As a result of the La Croux incident, they have assessed the safety implications of all their computer systems and have introduced new management and quality assurance controls during design, installation, modification and maintenance.

We should all aim to gain the maximum benefit from such incidents. We want to avoid death and injury, and do as much as is technically and economically justified to achieve this goal. Even for non-safety related systems there can be safety side effects. Take the example of a computer failure at Express Newspapers (4). A competition had been set by computer, but the computer-derived solution was faulty. This led to a large number of people believing they had won, and as instructed they telephoned to claim their prize by the advertised deadline. This caused a failure in a telephone exchange due to the high call rate (another computerised system). The telephone exchange failure led to a large number of telephone users having no service, including the emergency numbers to contact police, ambulance and fire services. I am sure that no-one in the company operating the computer producing faulty numbers had considered these consequences. Nor had the telephone exchange designers dealt adequately with the potentially very high call rates that might be experienced in this circumstance.

Staying on the theme of newspapers, electronic publishing is at long last being introduced in the UK newspaper industry. But this has not been a smooth process. Two new newspapers, Today (5) and the London Daily News (6), have experienced software problems. Whilst these systems are new to the UK, they have been in use in other countries for a number of years. They are further evidence that a change of user, environment and application is a good way of revealing faults. The message is clear for safety related applications of computers.

The financial world has also had its problems. An incident at the Abbey National Building Society on 3 November 1986 resulted in service to 4000 counter-top terminals in the Society's 800 branches being unavailable for several hours (7). A new Integrated File Handling System was the source of the problem. The new system was kept on-line to allow the faults to be located, but this took time, and further outages occurred on 7 November when the system was re-tried. Since then the system has worked at an availability of 98.7%. This incident caused a backlog of almost 1 million transactions. The software had been extensively tested off-line.
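To put an availability figure like 98.7% in perspective, availability is simply the ratio of uptime to total time. A rough illustrative calculation (the round-the-clock service period is an assumption, not stated in the report):

```python
# Availability = uptime / (uptime + downtime).
# At 98.7% availability, a 30-day month of continuous service
# implies roughly 9.4 hours of accumulated outage.

hours_in_month = 30 * 24          # 720 hours of round-the-clock service
availability = 0.987
downtime_hours = (1 - availability) * hours_in_month

print(round(downtime_hours, 1))   # 9.4
```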


At the Bank of Scotland 245 cash dispensers failed for four hours on a busy holiday weekend (8). This failure was attributed to an applications-level package running on the bank's IBM mainframe computers. Many customers were inconvenienced. Other failures have occurred in banking systems which have been attributed to IMS software from IBM. This software is in use in many locations worldwide and it is easy to see that faults in system software could be the cause of common mode failures simultaneously worldwide. Other banking system failures have led to large unauthorised withdrawals from automatic teller machines (9,10). There is increasing integration in the UK banking systems to provide a wider range of inter-bank services for customers. In the near future there will be links established to allow a customer of one bank to draw money, obtain statements and make electronic funds transfers on the automated tellers of any other bank. If this system had a fault which could cause the whole of the UK banking systems' ATMs to cease services, there would be severe financial implications, a loss of confidence in the banking services, and even potentially a national financial crisis. Of course, much work is done to prevent such an incident, and many of the techniques used in safety analysis have application to security and integrity analysis. Also the systems used in accountancy, which are based on manual accounting systems, have a high degree of fault tolerance and fault detection capability, and are subject to external audit. All these have their equivalents for safety-related computer applications.

A further example of a financial system causing problems is the increasing use of computers to support human decision making, or even to make automatic the buying and selling of shares on the Stock Exchange. In a recent incident in the UK (11) the Post Office's pension fund computer went on a spending spree and bought $300M. Subsequently the share prices have increased, so the computer software got it right. However, in the USA there was a slide in the stock market which appears to have been initiated by a number of computers nearly simultaneously issuing selling orders for the same stocks. In a sense the computer software got it right again. But the concern is about cause and effect, or self-fulfilling predictions.

There are already plans to support the decision making by airline pilots during routine flights and in emergency conditions (12,13). It is probably only a matter of time before this is achieved. Since 'pilot error' is a very frequently quoted cause of aircraft accidents, the use of expert systems might be the means of an overall reduction in accidents. Both civil and military aircraft currently flying are dependent on the safe operation of computer systems. Against this must be set existing concern about the ability to test and verify expert systems (14).


I have not, so far, touched on education and training and their implications for functional safety. An incident occurred at one of the UK schools examination boards (15,16). The examination taken by many UK children at age 16 is called the General Certificate of Education at 'O' (ordinary) level. There are a number of organisations, or boards, who set exams in a wide variety of subjects. The boards mark the exams and issue certificates with a grading given for the examinee's performance. The scale is from A, highest, to F, lowest, with employers usually wishing to see a grade C or higher. Due to a new software package, and a failure in human checking procedures, 800 pupils who should have received an A, and 1000 pupils a B, were only awarded a C grade in Chemistry. When the results were published, the parents queried the unexpectedly low results achieved, and this was the first knowledge the board had of the problem. The board now claims: "But we have now adopted new procedural methods, and revamped the program, so that this kind of mistake cannot happen again". A safety-experienced person would not be so certain.

Extravagant claims of success and failure, in advertising and in newspaper headlines, are a frequent problem. In the UK we have the Advertising Standards Authority who act as watchdogs. Standards also apply to computer systems, and there is currently much support for the set of Open Systems Interconnection standards and the TOP and MAP protocols based on the standards. In the UK, Government has been encouraging UK industry to adopt these standards, particularly for CIM, Computer Integrated Manufacturing. As part of this initiative an exhibition was held in December 1986 known as CIMAP. The exhibition was advertised under the phrase: "They said it couldn't be done! They were wrong". A more accurate version would be: "They said it couldn't be done! They were so nearly correct". A newspaper report at the time had the headline "Gremlins threaten CIMAP" (17). The aim of the exhibition was to demonstrate that equipment from different suppliers could be interconnected and operate as required. The OSI MAP protocols were selected, and manufacturers' equipment was tested for conformity to the standard. The belief was that one conforming product would interwork with another conforming product. However, the pace of development of the standards, and their implementation in products, led to many difficulties. The exhibition took place on time, and interworking was demonstrated, but at the cost of much unexpected extra work. Safety people need good standards, and they need conformity to standards, but a well known way of inducing lack of safety is to rush things.

I do not want to leave you with the impression that all is gloom and despondency. This is certainly not the case. Organisations are solving their own problems, and there is much collaboration towards addressing common problems.


To take just two more newspaper references, British Rail have recently introduced a quality management system which defines the whole software life cycle, and they are now experiencing improvements in specification, development times and user satisfaction through the improved quality of software and its associated documentation (18).

A similar move is underway within the Government body the Central Computing and Telecommunications Agency, who advise government departments on computing strategy, policy and implementation. The newspaper headline is highly relevant: "CCTA to crack the whip after IT systems flop" (19). They are doing this through the introduction of standard analysis methodologies linked with quality management systems to recognised national and international standards.

For safety related computer systems we aim for the ideal of no safety incidents; we cannot afford to have system flops.

GUIDANCE ON SAFETY CRITICAL COMPUTER SYSTEMS

There are a number of sources of guidance already available to the suppliers, installers, users and licensing authorities. Rather than review the whole field, I will concentrate on three interlinked developments and some UK initiatives.

EWICS TC7 GUIDELINES

The first organisation in Europe to consider Safety, Security and Reliability of Industrial Computer Systems with a view to providing guidelines was the international group which became known as the European Workshop on Industrial Computer Systems (EWICS TC7). This group first met in 1974 and still continues to attract experts to participate in its work and Workshop/Symposium series.

EWICS TC7 membership is largely drawn from users of industrial computers (as opposed to manufacturers or suppliers), and these vary from small to very large companies, including some of the largest European nationalised organisations and multinational public companies. In quarterly meetings the members compare experiences and prepare guidelines for their own use and to the general benefit of European industries.

To date six guidelines are completed (20-24) and it is expected that these will be published as a set in book form early in 1988. Current work aims to complete a further four guidelines (48-51) by the end of 1987. The latest work addresses:

Systems Integrity;

Safety related measures to be used in Software Quality Assurance;

Design for Systems Safety;

Reliability and Safety Assessment.

It is planned to publish these guidelines in a second book in 1988.

The members of EWICS TC7 have benefitted their companies directly through the work and results of producing guidelines, and many companies have made use of this generic guidance in constructing their own company guidelines. A wider audience has had access to the work via the SAFECOMP Workshop Series (26-39).

EWICS TC7 also maintains extensive links with national and international standards bodies.

The prime link with the IEC and ISO is via members of their technical committees and working groups who are also members of TC7. In this way the following topics are covered:

IEC TC44   Electrical Equipment of Industrial Machines
           WG1   Aims and means of standardisation of electrical equipment of industrial robots and processor control systems.
IEC TC45   Nuclear Instrumentation
  SC45A    Reactor Instrumentation
           WGA3  Programmed digital computers important to safety in nuclear plants.
IEC TC47   Semiconductor devices and integrated circuits
  SC47B    Microprocessor systems.
IEC TC56   Reliability and Maintainability
           WG10  Software Aspects.
IEC TC57   Telecontrol, teleprotection and associated telecommunications for electric power systems.
IEC TC65   Industrial Process Measurement and Control
  SC65A    System Considerations
           WG6   Standard performance specification for programmable controllers
           WG8   Evaluation of the integrity of system functions
           WG9   Study on Safe Software.
  SC65B    Digital data communications for measurement and control systems.
IEC TC74   Safety of data processing equipment and office machines.
IEC TC83   Information Technology Equipment; Functional Safety of IT Equipment.
IEC ACOS   Advisory Committee on Safety.

EWICS TC7 members also attend some working groups of ISO TC97, ISO TC176 and ISO TC184.


At the national level, there is a further set of links via common memberships of various committees and working groups of the national standardisation bodies such as BSI, DIN and AFNOR. This may be via membership of the national committees to the relevant IEC and ISO committees, but also provides a wider coverage of the work on safety and reliability standards that apply to industrial use of computers. IEC TC45/SC45A/WGA3 has published a standard based on the EWICS TC7 Software Development position paper (25).

THE UK HSE GUIDANCE ON PES

In 1982 the UK Health and Safety Executive (HSE) commenced work on draft guidance on the safe use of programmable electronic systems. Much of the work towards the documents published in 1984 (30-32) was sub-contracted to my former company, the UK National Centre of Systems Reliability, and I was fortunate to be able to lead the team which prepared volumes 2 and 3. The HSE are the official organisation in the UK dealing with all industries and the safety of the people employed and the equipment installed. They were experiencing a rapid increase in the number of safety related computer systems being installed, and industry was seeking their guidance on how to meet the safety requirements. The Draft Guidance was issued to industry for consultation, which continued over two years. The definitive Guidance will be issued in 1987, and UK industry will then be obliged to follow the guidance.

The HSE Guidance documents contain several important messages. The first is covered by the phrase "reasonably practicable" (RP). This requires the user of the system to do everything which is RP to ensure safety. This includes financial considerations, where arguments to balance the risk of injury against the cost of improving safety are allowed. If the remaining hazards are of a minor nature, then the user would not be expected to spend a lot of money to remove the hazard. However, for greater hazards the user would be expected to spend money to reduce their frequency or magnitude. No account is taken of the ability of the company (in a financial sense) to afford any hazard improvements. The same hazard would require the same degree of improvement irrespective of company size or financial status.

A second message is that the cost of assessment may also be balanced against the potential hazards. So a low hazard potential system, provided it is assured it is low hazard, would need a less intensive assessment than a system with high hazard potential.

The third message is that, whenever possible, the reliability and safety integrity of the programmable system should be quantified; where this is not possible, qualitative techniques should be used. A number of checklists are provided to support the qualitative analyses, and of course qualitative analysis usually precedes quantification.

Whilst it is often possible to quantify assessment of the hardware parts of a system, the Guidance takes the firm line that it is not currently feasible to quantify software reliability or safety. Many of the checklists therefore apply directly to software.

A fourth message is that the architecture, or configuration, of a programmable safety system is an important factor in determining whether the system is likely to meet safety criteria. Guide figures are given of the mean and range of reliability performance to be expected of example configurations. Where the hardware of the system has parallel or redundant channels, then the use of diversity in the software is recommended as an aid to improving overall safety by reducing the potential for common mode failure in the software.
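The redundancy-plus-diversity idea above can be illustrated with a small sketch. The channel functions are hypothetical, not taken from the Guidance: each decides the same trip condition by a different method, so a design fault in one channel is unlikely to be common to all three, and a majority vote masks a fault in any single channel.

```python
# Sketch of 2-out-of-3 voting over diversely implemented trip logic.
# The channel functions and the limit value are hypothetical illustrations.

TRIP_LIMIT_C = 350.0

def channel_a(temp_c: float) -> bool:
    # Direct comparison against the limit.
    return temp_c > TRIP_LIMIT_C

def channel_b(temp_c: float) -> bool:
    # Diverse formulation: compute the margin and test its sign.
    margin = TRIP_LIMIT_C - temp_c
    return margin < 0.0

def channel_c(temp_c: float) -> bool:
    # Diverse formulation: integer-scaled comparison (tenths of a degree).
    return int(temp_c * 10) > int(TRIP_LIMIT_C * 10)

def voted_trip(temp_c: float) -> bool:
    # A single faulty channel is outvoted by the other two.
    votes = [channel_a(temp_c), channel_b(temp_c), channel_c(temp_c)]
    return votes.count(True) >= 2
```

Diverse hardware channels running identical software would share any software design fault; the diverse formulations are what reduce the common mode potential.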

One of the inputs to the HSE documents was that of EWICS TC7. HSE plan to extend the Guidance to cover in greater detail recommendations specific to Software Specification and Development.

THE CEC COLLABORATIVE PROJECT ON PES ASSESSMENTS

Whereas the HSE Guidance addresses how to specify, design, and install a safety related PES, and to assess its safety integrity, a project started in 1983 specifically addressed issues in assessment as they applied in 7 organisations in the UK, France, Germany and Denmark. The result of this collaborative study is reported in a book I edited, published in 1986 (33).

The aim of the study was to provide a framework for the safety-integrity assessment of programmable systems. The framework provides a reference structure to which are related the life cycle phases in the assessment process, the techniques to be used at each phase, and the need to select and compare against acceptable safety criteria.

The CEC project ran in parallel with the consultative phase and thus had an influence on the HSE documents appearing in 1987. The project also showed that, despite many historical differences in practices, legal requirements, and safety criteria, it proved possible to agree a framework for assessment which was acceptable in all four countries.

This work has an influence on the IEC Advisory Committee on Safety, who have proposed new working groups to produce standards based on the principles contained in the HSE Guidance and the CEC Project results.

UK INITIATIVES AND RELATED PROJECTS

There are a number of current initiatives in the UK and Europe which are addressing Software Engineering. A number of these are relevant generically to safety related applications of computer systems; others are directly relevant.

THE UK STARTS PROGRAMME

The Software Tools for Application to large Real Time Systems (STARTS) initiative started in 1982 in response to a report from the National Economic Development Office (NEDO). The report noted the low use of software tools and the fragmented approach to the development and use of tools.

Four years later, the initiative has achieved significant milestones. In 1984 the STARTS Guide was published (34). The guide provided advice on the choice of software tools and methods for large real-time software. It concentrated on the management and control aspects of size and complexity, these having been identified as the major deficiencies in dealing with ever larger software.

In producing the Guide, a number of working groups were established to deal with:

Project Control;
Requirement Specification;
The Design Process;
Verification, Validation and Testing;
Version and Configuration Control.

This work was a collaboration by 40 experts from 17 UK organisations.

Major sections of the guide introduced the concepts of the software tools, techniques and methods and related these to a new life cycle model which has been adopted in other work in the UK. Management of quality, software configuration and the project were also related to this model. Definitions of project tasks by activity and life cycle were presented.

The guide established criteria for selecting a number of 'front runner' software tools. The final list was 24, and each working group considered the tools relevant to its area of activity. Five detailed sections of the report dealt with the state of the art, theory and practice for each activity. Each tool considered was described in detail in an appendix, and a structured listing of the non-technical (availability, deliverables, maturity, cost) and technical characteristics (application, extensions, user feedback) enabled easy comparison of similar tool functions.

The report was widely read, and currently an update is being prepared for publication in 1988.

A series of debrief reports (35-44) has been prepared on a number of tools: SLIM, ARTEMIS, PRICE-S, CORE, JSD, SOFCHIP, VDM, SDL, SAFRA and Z,

and forms a further major milestone in the programme. These reports have been prepared by users of the tools and describe experience of use, sharing this with others. The supplier of each tool has also been given the opportunity to comment in the debriefs. Z and VDM are formal languages for specifying systems and are thus highly relevant to safety applications of programmable systems.

Combined with the STARTS Guide, these debrief reports provide a valuable database for the potential tool user.

However the STARTS programme has progressed further in encouraging suppliers of software to adopt the best current practices in Software Engineering. An important part of the strategy was the formation of a Public Purchasers Group (PPG) comprising:

Ministry of Defence;
British Telecom;
British Gas Corporation;
British Steel Corporation;
Civil Aviation Authority;
Central Electricity Generating Board;
South of Scotland Electricity Board.

This has enabled a co-ordinated and constructive demand from purchasers of systems for their suppliers to use the best software engineering practices.

The major milestones resulting from the PPG are the STARTS Purchasers' Handbook (45) published in 1986, and the adoption of the handbook by all members of the PPG as part of their procurement process. The handbook has achieved a wide distribution in the UK in a short time. The supply industry are actively considering how they can best and most effectively match the requirements of the handbook.

Following introductory chapters to STARTS and an overview of the problems encountered when developing software, the Handbook then describes the Project Lifecycle. Further chapters cover:

Requirements specification;
Invitation to tender and guide to compliance;
Content of the tender and its evaluation;
Product acceptance;
Software methods and tools.

Each chapter cross references the STARTS Guide (34), calls up relevant standards and guidelines, and gives examples of tools.

In four significant appendices the Handbook covers:

A Purchaser's manifesto
B Structure of a requirements specification
C Descriptions of methods and tools

Work in the STARTS programme continues with an update of the Guide due in the second quarter of 1987, an update of the Handbook due at the end of 1987, and discussions on the most appropriate form of contract for software work, e.g. at the Requirements phase on a cost-plus basis, and on when competitive tendering is most appropriate. An extension of this real-time work into Data-Processing use of tools and methods has now been agreed and the programme is being defined.

THE SOFTWARE DATA LIBRARY (SWDL)

This is a UK Alvey project which is now in its second of three phases:

Phase 1  Feasibility
Phase 2  Implementation
Phase 3  Self supporting library

The current phase ends in 1988.

The main aim of the project is to create a centralised facility which keeps and provides statistical information about all aspects of software development. This will benefit the software industry through data collection which will encourage effective management and control, and will assist the predictability of cost, quality and reliability. Users of the library will benefit from the feedback of quantified data. The library is being created to collect metrics at any level. This will assist managers in selecting the appropriate environment and process to create software which performs as required.

This is a collaboration between:

NCSR - The National Centre for Systems Reliability;
GEC Software;
ICL;
LOGICA;
NCC;
Systems Designers.

The database is located at The National Computing Centre in Manchester (46).

The British Standards Institute is considering adopting the SWDL data definitions and data collection forms as a UK standard.

REQUEST PROJECT

This is a project within the European Communities ESPRIT programme. It is a large project which began work in 1984. The main tasks of this project are concerned with:

1 Quality modelling for software. A constructive quality model is being researched and tool support is planned. The model, COQUAMO, was inspired by the work of Boehm on the COCOMO constructive cost model.

2 Reliability modelling for software. Consideration of new models for a wide range of required software reliability targets is underway. The model(s) will be supported by tools. The need for new types of models is established by the poor performance of existing models, particularly in their predictive capability and portability between different applications and processes.

3 A database to collect and hold the quantified data to support the model creation and validation.

The name of the project is derived from these aims: Reliability and Quality of European Software Technology. The work is collaborative between:

STC/ICL, UKAEA (UK);
AEG, GRS (D);
THOMSON INFORMATIQUE-INTERNATIONALE (F);
ELECTRONIK CENTRALEN (DK);
ESA CONTROL (I).

A joint working party of SWDL and REQUEST has agreed the standard for data collection, and common forms and database structures will be used. The REQUEST database is located at the UKAEA site at Winfrith (47).

INTEGRATED PROJECT SUPPORT ENVIRONMENTS (IPSE)

Much work within the Alvey and ESPRIT programmes has been devoted to developing IPSEs. These aim to give improved support to the software creator by providing the common services and interfaces necessary between tools supplied by a wide range of vendors. They are intended to allow a software creator to produce an IPSE with a set of tools which is specific to the particular application, the management style, the chosen life cycle and the techniques/methods used within the particular organisation.

NCC act as monitor to the two Alvey IPSE developments, ASPECT and ECLIPSE, and also as a reviewer of the ESPRIT project SFINX. These are to provide an implementation of the emerging European standard PCTE (Portable Common Tool Environment) defined within an ESPRIT project.

The relevance of the STARTS programme, the SWDL and REQUEST projects and the work on IPSEs is that the environment in which software is produced will become more controlled, easier to monitor, more amenable to measurement, and therefore more capable of consistently progressing from requirements to delivered and maintained software product than has hitherto been practicable.

The implications for safety are obvious; experience of other disciplines is that control leads to both understanding and improved performance.


THE SOFTWARE TOOLS DEMONSTRATION CENTRE (STDC)

The STARTS programme and its PPG is in part a technology-transfer activity, partly purchaser pull. If the suppliers of software are to be able to respond to these initiatives, they will need access to the technology.

It is a truism that Software Tools are expensive to acquire and install, and acquiring the necessary skill in their use absorbs time for off- and on-the-job training. Easing the process of selecting tools was seen as a major factor in introducing the full range of software engineering into everyday industrial use.

The first step in this process was taken in September 1986 with the opening of the Software Tools Demonstration Centre at the National Computing Centre in Manchester. The Centre has a wide range of hardware facilities to mount a range of vendor-provided tools, and has operated a rigorous acceptance process, based on the recommendations in the STARTS Guide, to take tools from their suppliers, mount them on the Centre's hardware and develop demonstrations.

The STDC, in conjunction with STARTS, is building an Information Base on Tools and Methods. This is held on-line on a VAX-based ORACLE system at STDC. The database contains details of over 200 items at this time.

CONCLUSIONS

You may feel that your projects are getting tougher, particularly the safety related parts. In this you are probably right. Others have experienced this feeling; they have, with like-minded people, set down their experience in the form of guidance documents, and this is available to help you achieve safe and reliable computing systems.

There is also a framework available to assist your assessment of whether a computer system is likely to, or continues to, match up to the appropriate safety criteria. However, not all your problems have been solved; you will need to use professional skills to ensure that quality is maintained throughout system life, and that you have an adequate specification and implementation that meets your requirements. This is not easy, but it is possible. Much assistance is now available in the form of methods, methodologies and tools that are themselves of adequate quality, and that also minimise the risk of the more frequent errors propagating through to become faults and eventually failures in your systems.

There are an increasing number of programmable systems which not only demand the attribute of functional safety but also achieve it.


REFERENCES

1 The Wall Street Journal, 28 January 1987 (Eastern Edition).

2 Report in the Times, 21/6/84.

3 Correspondence with Electricité de France.

4 Report in Computing, 25/9/86.

5 Computing.

6 Computer Weekly 5/12/86.

7 Computing, 20/11/86.

8 Computing, 28/8/86.

9 Micronet 800, Prestel, Page 800111473a, 25/01/86.

10 Viewfax, Prestel, Page 2582200026a, 18/04/86.

11 Computing, 5/12/86.

12 Viewfax, Prestel, Page 258200073a, 8/9/86.

13 Viewfax, Prestel, Page 258200073b, 8/9/86.

14 Computing, -/-/-.

15 Computing, -/-/-.

16 Computer Weekly, -/-/-.

17 Computing, 20/11/86.

18 Computer Weekly, -/-/87.

19 Computing, 4/9/86.

20 Guideline for Verification and Validation of Safety Related Software. European Workshop on Industrial Computer Systems. Computers and Standards, pp 33-41, Vol 4, No 1, 1985. North Holland. (Position Paper No 3).

21 Safety Related Computers: Software Development and Systems Documentation. European Workshop on Industrial Computer Systems. Published by Verlag TÜV Rheinland GmbH, Cologne, 1985. (Position Papers No 1 and 4).

22 Hardware of Safe Computer Systems, Position Paper No 2, European Workshop on Industrial Computer Systems, June 1982.


23 Techniques for Verification and Validation of Safety Related Software. European Workshop on Industrial Computer Systems. Computers and Standards, pp 101-112, Vol 4, No 2, 1985. North Holland. (Position Paper No 5).

24 System Requirements Specification for Safety Related Systems, Position Paper No 6, European Workshop on Industrial Computer Systems, January 1985.

25 IEC Publication 880, International Electrotechnical Commission, Geneva.

26 SAFECOMP '79, Proceedings of the Workshop, Stuttgart, Federal Republic of Germany, May 1979, ed Lauber R, Pergamon Press.

27 SAFECOMP '83, Proceedings of the Workshop, Cambridge, UK, September 1983, ed Bayliss J A, Pergamon Press.

28 SAFECOMP '85, Proceedings of the Workshop, Como, Italy, October 1985, ed Quirk W J, Pergamon Press.

29 SAFECOMP '86, Proceedings of the Workshop, Sarlat, France, October 1986, ed Quirk W J, Pergamon Press.

30 Guidance on the safe use of Programmable Electronic Systems: Part 1: General Requirements, draft document for consultation, Health and Safety Executive, UK, July 1984.

31 Guidance on the safe use of Programmable Electronic Systems: Part 2: Safety Integrity Assessment, draft document for consultation, Health and Safety Executive, UK, July 1984.

32 Guidance on the safe use of Programmable Electronic Systems: Part 3: Safety Integrity Assessment Case Study, draft document for consultation, Health and Safety Executive, UK, July 1984.

33 Ed Daniels B K, "Safety and Reliability of Programmable Electronic Systems", Elsevier Applied Science, 1986.

34 The STARTS Guide, Department of Trade and Industry, 1984.

35 McRae C, Forshaw R, SLIM, A debrief report, STARTS, NCC Publications, 1985. ISBN 0-85012-544-8.

36 Rawlings N, ARTEMIS, A debrief report, STARTS, NCC Publications, 1985. ISBN 0-85012-545-8.

37 Mahy E, Day A, PRICE-S, A debrief report, STARTS, NCC Publications, 1985. ISBN 0-85012-546-4.


38 Looney M, CORE, A debrief report, STARTS, NCC Publications, 1986. ISBN 0-85012-547-2.

39 Smith G, JSD (Jackson System Development), A debrief report, STARTS, NCC Publications, 1986. ISBN 0-85012-549-9.

40 Pratt N, SOFCHIP, A debrief report, STARTS, NCC Publications, 1986. ISBN 0-85012-520-2.

41 Austwick N, Norris M, VDM (Vienna Development Method), A debrief report, STARTS, NCC Publications, 1986. ISBN 0-85012-574-X.

42 Smith J R W, Salmon J, SDL (System Design Language), A debrief report, STARTS, NCC Publications, 1986. ISBN 0-85012-577-4.

43 Price C, SAFRA (an interconnected set of tools), A debrief report, STARTS, NCC Publications, 1986. ISBN 0-85012-582-0.

44 Norris N, Z (a formal specification method), A debrief report, STARTS, NCC Publications, 1986. ISBN 0-85012-583-9.

45 The STARTS Purchasers' Handbook, NCC Publications, 1986.

46 SWDL, Software Data Library, An Alvey Project, Documents available from NCC Ltd, Manchester, UK.

47 REQUEST, Project 300, Reliability and Quality of European Software Technology, ESPRIT.

EWICS GUIDELINES IN PREPARATION

48 Systems Integrity, Position Paper, European Workshop on Industrial Computer Systems, due end 1987.

49 Software Quality Assurance and Measures, Position Paper, European Workshop on Industrial Computer Systems, due end 1987.

50 Design for System Safety, Position Paper, European Workshop on Industrial Computer Systems, due end 1987.

51 Reliability and Safety Assessment, Position Paper, European Workshop on Industrial Computer Systems, due end 1987.


System Safety, the Goal of Software Design

John Eva

Foxboro, The Netherlands

Delft Progr. Rep., 11 (1986-1987) pp. 183-195

Received: May 1987

The necessity to continuously improve the operating efficiency of manufacturing processes has resulted in a fundamental shift from pure Control Systems to Systems that now in addition encompass complex Process Management, requiring large amounts of data gathering and manipulation. This has resulted in a need for more advanced computer systems.

This need for more powerful computers has been made possible by the considerable technological developments in computer hardware, which are supported by the well established disciplines and techniques of hardware reliability and safety design.

But today and in the foreseeable future the larger portion of the total costs of computer systems is in the software, and is incurred after the equipment is released for operational use. When this is considered in the context of the primitive state of the Software Engineering disciplines and techniques, it is readily apparent that there is an urgent need for an immediate rectification of the imbalance.

In many of the applications the consequences of implementing faulty systems are not just financial, with loss of production, deterioration of product quality and damage to the process plant. They can ultimately result in the loss of lives.

A review of articles written in technical Engineering publications over the last five years reveals a wealth of information on "Software Quality". These articles concentrate on the theme of developing "Reliable Software" by using "Software Quality Assurance" techniques throughout the development cycle, ensuring that the software product is released according to agreed upon specifications with all deviations and revisions documented and classified.

It is not sufficient to generate mechanisms that ensure a product conforms to written specifications. The procedures have to start one stage earlier and encompass the validity of the specifications. It is in this area where the greatest weakness lies: a building is only as strong as its foundations.

Therefore the problem is not confined to the software; the scope encompasses the whole process from System conception through to proven operation, necessitating an "Intelligent System Solution". This can be illustrated by the following extract taken from an article entitled "Endo/Exo Batch Reaction Requires Complex Temperature Control":

"The best method for designing a sound, workable control strategy that has the best chance of long-term

success has three major parts. First, a thorough

understanding of the process is essential. The more complex the process, the more critical a thorough understanding of the process becomes. Second, a weIl defined, realistic set of operating objectives is necessary. Third, a sound understanding of process control technology, based on theory, practical experience, and a practial working knowledge of basic instrumentation (control valves and instruments), is

essential.

Experience shows a process of medium complexity should

have a ratio of process analysis to con trol concept

design of 3 to 1. That is 75% of the effort should go into process analysis and definition of operating objectives.

Today's con trol hardware is powerful and flexible, and should be effectively used to accomplish con trol

objectives. To do this, valuable engineering resources should be applied to understanding the process and its

operating and control objectives. With this knowIedge, the majority of complex control problems can be solved

by implementing weIl designed, simple control concepts.

The alternative approach is spending valuable engineering effort in programming complex solutions to poorly understood process problems with poorly defined control objectives. Past experience has proved that this approach lead to costly solutions that have little


The knowledge alluded to in the extract has to include an understanding of the Safety requirements. Here we can identify the following areas:

* Hazardous Area
* Operator
* Electrical
* Control
* Operational

Hazardous Area Safety

Classically, Hazardous Area Safety has been handled in one of two ways: either by utilizing approved hardware protection devices connected between the unapproved equipment and the device to be protected, or by designing the equipment to an appropriate set of standards. In the former case the concept of design approval is used, where the component is specified precisely for the environment and conditions under which it can be used. This concept is totally alien to the world of software, as exemplified by the inherent flexibility and diversity in programming languages and the lack of reusable designs. In the latter case, the standards automatically fail the software, necessitating that the hardware in itself be designed to preclude a hazardous condition existing.


Operator and Electrical Safety

Generally Operator and Electrical Safety hardware, as aspects considered in this area shock.

are determined by are for example:

Control and Operational Safety

Control systems can be broadly classified as either Batch or Continuous, where operation consists of three phases: Startup, Operation, Shutdown (noting that for Continuous Control, operation is usually restricted to the Operation phase). The two phases Startup and Shutdown require extensive use of Sequence Logic to implement the inferred phasing of actions. For example, in a shared equipment plant the process of transferring a batch of material from a fermentor to a shared drier requires checks to be made upon the availability of the drier prior to the start of the transfer. The use of Sequence Logic can extend into the Operation phase. For example, an exothermic reaction vessel will require a high-high level temperature trip alarm and recovery logic.

These two examples show that it is essential to consider ALL possible permutations of conditions during any phase of operation of the control system, if the Safety of the plant is to be maximised.
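The fermentor-to-drier transfer above can be sketched as a simple Sequence Logic interlock: no step starts until every pre-condition is confirmed. The flag names and the function are hypothetical illustrations, not from the paper.

```python
# Minimal sketch of the sequence-logic interlock described above.
# The plant-state flags and function names are hypothetical.

class InterlockError(Exception):
    """Raised when a pre-condition for a sequence step is not met."""

def start_transfer(drier_available: bool, fermentor_ready: bool) -> str:
    # Sequence Logic: every pre-condition must hold before the
    # transfer step is allowed to begin.
    if not drier_available:
        raise InterlockError("shared drier is not available")
    if not fermentor_ready:
        raise InterlockError("fermentor batch is not ready")
    return "TRANSFER STARTED"
```

Making the pre-conditions explicit in this way is also what forces the designer to enumerate the permutations of conditions the text insists upon.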

Considering that the ultimate goal of complying with the Hazardous Area standards is to prevent an explosion, an apparent dichotomy arises, because we expect software in both normal and abnormal operation not to create a hazard or an explosion. For example, in a boiler control system under flame-out conditions it is imperative for the control system to take the correct appropriate action, as this condition can result in an explosion.

Computer based systems have been used in protection systems. The list below gives examples of the areas:

* Nuclear reactor safety systems
* Fire and gas alarm systems
* Emergency shutdown systems for offshore platforms
* Machinery guarding and safety interlock systems
* Condition monitoring
* Environmental discharge monitoring
* Railway signalling and interlock
* Radiological protection systems

There must be some rationale to justify the use of computers in such critical applications. For this the following advantages can be listed:

* Better information display
* Signal filtering and validation
* Early warning diagnostics
* Complex analysis functions
* Complex shutdown sequences
* Running log for incident investigation
* Self-test and monitoring
* Cost effective for large systems
* Ease of modification

And disadvantages:

* Unpredictable failure modes
* Limited failure data
* Software reliability
* Susceptibility to electrical interference
* Intolerance to single failures
* Control and protection combined
* Ease of modification
* Difficulty of certification

There are two sets of considerations. The first is how to determine what actions should be taken by the software during normal operation of the software and supporting hardware, for both normal and abnormal system operation. The second is what actions should be taken by the software during abnormal operation of the software or supporting hardware, for both normal and abnormal system operation.

In all cases the problem of what is a fail-safe state has to be answered. This may not always be possible and instead a preferred state may have to be used. This may require extensive analysis of, for example, startup conditions.
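The fail-safe versus preferred-state decision can be sketched as a lookup from operating phase and detected fault to the action to command. All the states, faults and actions here are hypothetical illustrations, not from the paper.

```python
# Illustrative sketch (hypothetical phases, faults and actions):
# select a fail-safe state where one is defined, fall back to a
# "preferred" state where it is not, and default to the most
# conservative action for any unanticipated combination.

# (phase, detected fault) -> action to command
SAFE_ACTIONS = {
    ("operation", "sensor_invalid"): "hold last good output",    # preferred state
    ("operation", "flame_out"):      "close fuel valve",         # fail-safe state
    ("startup",   "flame_out"):      "abort ignition sequence",  # fail-safe state
}

def select_action(phase: str, fault: str) -> str:
    # Any combination not analysed in advance gets the most
    # conservative action available.
    return SAFE_ACTIONS.get((phase, fault), "trip to shutdown")
```

The value of writing the table out is that every blank cell is a permutation the analysis has not yet covered, which is precisely the incompleteness the next paragraph warns about.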

It is in the area of this analysis where the complexity of computer systems renders the task difficult and, when the general method of thinking is taken into consideration, likely to be incomplete. Engineers are trained to think inferentially and positively. They are not trained to hypothesise or to think about the abnormal. Consequently failure modes are not fully defined, defeating the objective of the exercise. It is under these sorts of circumstances that we would normally turn to computers!

It is wrong to consider the demonstration that a piece of software is reliable as adequate for the safety of the system.


Fault-tolerant Software

The reliability and operational safety of software can be greatly increased by ensuring that checks are made, for example to protect against invalid data ranges. However, such defensive programming techniques only cater for a very limited set of operational faults. A much more difficult problem is to recognise and correct for errors in the software itself. One such method that can be used is fault-tolerant software.
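The defensive check mentioned above can be as simple as validating every input against its physically plausible range before it is used. The signal names and limit values below are hypothetical illustrations.

```python
# Sketch of a defensive range check on process inputs.
# The signal names and limits are hypothetical illustrations.

PLAUSIBLE_RANGES = {
    "reactor_temp_c": (-20.0, 500.0),   # physically plausible span
    "level_percent":  (0.0, 100.0),
}

def validated(signal: str, value: float) -> float:
    lo, hi = PLAUSIBLE_RANGES[signal]
    if not (lo <= value <= hi):
        # Reject the reading rather than propagate an implausible value.
        raise ValueError(f"{signal}={value} outside plausible range [{lo}, {hi}]")
    return value
```

As the text notes, this guards only against bad operational data; a fault in the program's own logic passes such checks untouched, which is why the fault-tolerance techniques that follow are needed.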

Fault-tolerant software has the built-in capability to preserve the continued correct execution of a software program and its input/output functions in the presence of either software or hardware faults. The steps involved are, first, error detection and, second, error correction or avoidance. Checking of data is important: inputs, intermediate results, and end results must be checked for plausibility. Timing checks are important in real-time systems; a common method is to have an external timing circuit which must be triggered at regular intervals. Control flow checking is important because incorrect execution sequences will lead to unreliable results. The complexity of most current software makes the recognition of control flow errors an extremely difficult task.
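The external timing circuit described above is commonly called a watchdog. In a real system it is a hardware circuit; the software-side model below (with a hypothetical timeout value) is only to illustrate the idea that the supervised program must "kick" the timer at regular intervals or a timing fault is declared.

```python
import time

# Software model of a watchdog timing check, for illustration only:
# in practice the deadline is enforced by an external hardware circuit.

class Watchdog:
    def __init__(self, timeout_s: float):
        self.timeout_s = timeout_s
        self.last_kick = time.monotonic()

    def kick(self) -> None:
        # Called by the supervised program on each cycle.
        self.last_kick = time.monotonic()

    def expired(self) -> bool:
        # True if the program has missed its deadline.
        return time.monotonic() - self.last_kick > self.timeout_s
```

A stuck or looping program stops kicking the timer, so the watchdog detects the timing error even though the program itself cannot.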

Once errors have been detected, ways must be found of continuing so that the system does something sensible again in the shortest possible time. The new operating level may have to be functionally degraded. Since software faults are permanent faults (i.e. the fault is triggered whenever the same set of conditions is present), a system with high availability must be able to recover from software faults gracefully. Two specific techniques for providing fault tolerance by software redundancy are N-version programming and recovery blocks.
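The recovery block technique named above can be sketched as follows: a primary routine runs first, an acceptance test checks its result, and a simpler alternate routine is tried if the test fails. The two routines and the acceptance test here are hypothetical illustrations of the technique, not from the paper.

```python
# Sketch of a recovery block: try the primary algorithm, check its
# result with an acceptance test, and fall back to an alternate if
# the test fails. Routines and test are hypothetical illustrations.

def acceptance_test(result: float) -> bool:
    # The result must be a plausible (non-negative, bounded) value.
    return 0.0 <= result <= 1000.0

def primary(raw: float) -> float:
    # Primary routine: here deliberately faulty for negative inputs,
    # to show the acceptance test catching a software fault.
    return raw * 2.0

def alternate(raw: float) -> float:
    # Simpler, more robust alternate routine (degraded accuracy).
    return max(0.0, min(raw, 500.0)) * 2.0

def recovery_block(raw: float) -> float:
    result = primary(raw)
    if acceptance_test(result):
        return result
    # Primary failed its acceptance test: discard and try the alternate.
    result = alternate(raw)
    if acceptance_test(result):
        return result
    raise RuntimeError("no acceptable result from any alternate")
```

The fault is permanent, as the text observes: the same negative input always trips the primary, but the recovery block degrades gracefully instead of propagating the bad result.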
