Proceedings of the 4th Ship Control Systems Symposium, Den Helder, The Netherlands, Supplement

PROCEEDINGS

FOURTH SHIP CONTROL SYSTEMS SYMPOSIUM

October 27-31, 1975

ROYAL NETHERLANDS NAVAL COLLEGE
DEN HELDER

SUPPLEMENT

THE SYMPOSIUM WILL BE HELD IN THE NETHERLANDS, THE HAGUE - CONGRESS CENTRE - 27-31 OCTOBER 1975

Statements and opinions expressed in the papers are those of the authors, and do not necessarily represent the views of the Royal Netherlands Navy.

The papers have been reproduced exactly as they were received from the authors.

SUPPLEMENT

CONTENTS

CHANGES OF CHAIRMEN

ERRATA

SESSION D1: The plight of the operator. J. Stark and J. Forrest. PAPER NOT RECEIVED

SESSION N1: Naval ships control reliability: a hardware-software issue. P.P. Dogan

SESSION P2: An experiment to determine the effectiveness of the collision avoidance features of a surface ship bridge control console. A.D. Beary Jr. and W.J. Weingartner. PAPER NOT RECEIVED


NAVAL SHIPS CONTROL RELIABILITY: A HARDWARE-SOFTWARE ISSUE

BY

Pierre P. Dogan

The Charles Stark Draper Laboratory, Inc., Cambridge, Massachusetts (USA)

This paper looks at conceptual approaches to boosting the reliability of ship control systems, based on current and predicted trends in components and in system architectural technologies. References are made to space and other programs. An intimate mix of hardware and software issues needs to be addressed; as hardware component technologies progress, often driven by advances from commercial rather than military developments, a need emerges for new hardware and software architectures dedicated to the military mission, which, the author feels, the marketplace of commercial developments is not likely to create spontaneously.

1. TRADITIONAL APPROACHES IN NAVAL SHIP CONTROL DESIGN

Manual controls, systematic reliance on several levels of manual backup, as well as the availability of onboard repairs, have been traditional assumptions of the control system design philosophy for naval ships and submarines. The traditional approach in naval ship machinery and motion control can be ascribed to the perception by the military users of a basic lack of high reliability in available control technology. This perception is now being gradually modified by the adoption of digital technology, usually in the form of substitution of analog equipment by programmable digital controllers such as the standard U.S. Navy AN/UYK-20 minicomputer.(1)

In contrast, numerous naval combat systems have recently been, or are being, acquired today where sophisticated real-time integration of sensors, effectors, and displays does not fit the traditional manual control approach at all; in these, increased reliance is placed on large central computer complexes (CCC). Central computer complexes on surface ships and submarines are typically made up of varying combinations of several Navy standard AN/UYK-7 mainframes, and constitute the central nervous system of military payloads distributed along the length of the vehicle.

In the last decade of naval combatant platform design there was thus, at least for a while, a trend to marry complex military payloads controlled by sophisticated central complexes to surface or subsurface platforms that relied mostly on manual or only semiautomatic machinery and motion control, or which only recently were slated to use decentralized minicomputers.

Why this contrast?

What are the reliability virtues of an "octopus" central computer complex wired to many parts of the ship? Alternatively, for how long will legitimate conservatism in ship control design (i.e., maintain safety and reliability) necessarily imply the rejection of automation? Can naval automation be reconciled with safety/reliability and low life-cycle cost? Reliable answers to these questions cannot, of course, be completely given. The thesis of this paper is that the central computer complex trend in combat systems and the decentralized minicomputer trend in ship control will eventually merge, as the fundamental reliability problem still faced by each trend is gradually resolved. This resolution and merging will result from steady advances in three technology areas:

Microelectronics (large-scale integration and very large-scale integration packaging) and optical transmission components.

Local and distributed fault tolerance.

Ultra-reliable large-scale real-time software, made possible by a software reliability approach such as Higher Order Software (HOS).(2,3)

These advances are expected to increase the confidence level in naval computer control by an order of magnitude or more over the next decade.

The contrast in automation and centralization levels between combat and ship control systems can be explained. The complexity and speed requirements of modern combat systems demanded computerization from the onset; systems integration was perceived to be best done through software. From these givens, the combat system designers could leapfrog an intermediate design approach that would have used local dedicated computers, and that would have been prone to equipment proliferation and high logistics cost; a central computer complex approach appeared to offer economies of scale (including, it seemed then, cost reduction) and an increased ability to shift computer loads between tasks, an apparent advantage for casualty control. Reliability was not the overwhelming consideration. Specific attempts to achieve adequate reliability are usually made by complex redundant designs using the replication of whole computers. The very decision to standardize on the AN/UYK-7 computer accelerated this trend.

In ship and submarine machinery and motion control, however, safety and reliability have always been the prime consideration: "Don't lose the ship." Allowance for automation is made sparingly, and usually in the context of safety issues involving phenomena occurring at too high a speed for humans to handle (e.g., gas turbine overspeed). In spite of the gradual introduction of automatic control, a much higher level of reliability in control equipment needs to be demonstrated and conveyed to the user community before machinery and motion control of ships and submarines are turned over to "black boxes". While the need for reduction in life-cycle costs through reduced manning is indeed drastic in these days of economic hardship, it has not yet met a sufficiently low-risk level of automation technology to materially impact ship control design.

2. NEW EMERGING NAVAL SHIP CONTROL REQUIREMENTS

Future naval combatant vehicles will need more than maximization of ship availability, a simpler commercial objective. Stringent requirements for reliable equipment and systems operation stem from several facts central to the military mission. However, automation of naval ship control will continue proceeding at a slower pace than in similarly sized commercial ships. Four factors summarized below explain why.

(1) The Military Mission Is More Complex--It is impossible to reduce the function of a naval ship crew to mostly or exclusively maintaining ship systems. The crew has the vital function of manning the military payloads for strategic or tactical engagements, a requirement commercial ships do not have. The scenarios of engagement, and the control of these payloads, require a high ship control reliability.

(2) Automation of Steady-State Conditions Is Easy; Automation of Transients Is Difficult--The essence of the military mission of a mobile platform such as a surface ship or submarine lies within complex sequences of mission-phase "transients" involving changes in the control of vehicle motion, motion rates, and the activation of payloads. While automatic control can be tailored to steady-state conditions with relative ease (such as speed and course-keeping for a commercial ship), the naval ship or submarine requires a control system to optimize the scenario transients within safety limits. For a submarine these are typically: rapid propulsion maneuvering; boat trimming as a function of sometimes rapidly changing speed; quick diving; rapid but covert approach to the surface, especially in agitated seaways; missile launch in a seaway; variable ballast control; combined trimming and steering; etc. I believe that the "bottleneck" that specifies the complexity of the ship or submarine control system, and eventually its systems-level reliability, lies in the safe transition between these transient mission phases. "Absolute" or ultra-high reliability should be expected of the motion control equipment during these transitions, since the penalty to be incurred for an equipment fault far outweighs the simpler economic penalties that would be incurred by a commercial platform of similar size (e.g., a submarine below collapse depth, or broaching through the surface in wartime).

(3) The Human Factor: Ingrained, and Often Justified, Risk Aversion--The human factor in accepting automation of naval ship control functions remains dominant. Methods of officers' performance evaluation in peace and wartime probably create a special aversion to having to depend on "black boxes", especially ones that are known to fail more than occasionally.

(4) Wartime Logistics Constraints--From a logistic point of view, it is not reasonable to exaggerate the "short-time-to-repair" design approach, since it implies the availability of spares and relatively high crew skills, both of which can be in short supply, or cannot be made available quickly enough in a war theater; in contrast, spares can be flown to stricken commercial ships anywhere.

A variety of other factors will influence future ship control requirements in addition to the automation of classical platforms. These include: reliable control of new kinds of payloads; high-performance ships critically dependent on automatic controls, such as combatant hydrofoils and surface effect ships; and new manned and unmanned submersibles, including encapsulated subsystems and payloads.

The tradeoff in these future designs is similar to the control configured vehicle (CCV) tradeoffs facing today's aeronautical designers; basic advances in control technologies will permit substantial vehicle weight savings and improve maneuvering characteristics in amounts otherwise unattainable. In both cases, the CCV airplane design and the automated naval high-performance ship or submarine, an "act-of-faith" in the control system reliability must be made by the designer. To justify this act-of-faith, airplane designers are devoting great energy to the area of "fly-by-wire", resorting to digital technology and redundant control configurations.(4) It is reasonable to assert that a similar type of activity ought to take place in the ship and submarine design community, starting with conceptual ship control designs especially tailored to advanced naval vehicles and the oft-mentioned smaller-size "mini-attack" or special-purpose, modularized, low-manning submarines, or possibly as retrofits to existing platforms.

Historically, the act-of-faith of the naval architect and marine designer in a reliable ship control system has eventually been adopted whenever no design alternative was available, or when the design alternative was eventually perceived to be unattractive. Unattended nuclear compartments have, of course, been automated, since radiation exposure demands it. Fully-submerged-foil seaworthy hydrofoil craft now totally depend on automatic controls, but the lesson was learned only after lengthy R&D in surface-piercing hydrofoils, which allegedly did not require a reliable control system, but had poorer seakeeping and maneuvering capability. Finally, when the time constants of the system to be controlled are very short compared with human reaction time, there is no alternative but at least partial automation (gas turbines).

From the above, it appears that a general requirement of future naval combatant platforms is the availability of lightweight, flexible, adaptive, self-maintaining ultrareliable controls.

3. QUANTIFYING FUTURE NAVAL CONTROL SYSTEM RELIABILITY REQUIREMENTS

The inadequacy of specifying ultra-high reliability requirements by the sole use of the mean-time-between-failures (MTBF) parameter has been indicated.(5) This method of specification is not sufficient, and should be broadened to include the "mission success probability over a finite period of time". From this, one can calculate the "prorated, hourly reliability". The period of time over which the probability of system nonfailure is guaranteed should be commensurate with the ship or submarine mission duration, or the time elapsing between two successive maintenances; for inaccessible parts, periods could typically extend over a few days (search and rescue missions), a few weeks (ASW missions), or several months (strategic deployment or other covert missions).

A key step in obtaining equipment-level reliability goals is the apportionment of the overall system success probability to the various functions and corresponding equipment which contribute to the system. Two general rules seem applicable to the reliability allocation process within a system: first, the parts have to be more reliable than the whole, i.e., it seems necessary to allocate to the components and subsystems a reliability factor better by an order of magnitude than the overall system reliability goal. Second, there must be balance in the way the nonfailure probability is apportioned to the subsystems that are connected together, i.e., one must avoid "design overkill", or "gold plating", in isolated areas.

In particular, the reliability of electronic, electrical, and sensing portions of the control system must be matched with the reliability of the mechanical actuations.(6) Three examples from a nonship environment might help to illustrate quantitative apportionment of reliability.

(1) The Apollo Computer--An Apollo mission extended over a period of one month in the worst case. Nonfailure probabilities ranked as follows:

Overall mission success: .99
Apportionment of mission success probability to the Apollo computer: .998

It does not quite make sense to talk about a 100,000-hour MTBF computer, especially when redundant internal structures are used. Instead, one should typically talk about a ".99999-nonfailure-probability" computer that has to operate over some much shorter period of time. This is explained below.

The Apollo crew safety goal was .999, and was made purposely independent of the computer. The prorated hourly Apollo computer reliability requirement is of the order of .99999; that is, a failure rate of 10^-5 per hour was deemed permissible. It is interesting to note that the retrospective reliability of .998 for the Apollo computer was calculated on the basis of the following experimental data, covering five different regimes:

No computer failure ever occurred during actual flight.
Aging time: 292,000 hours
Vibration tests: 6,500 hours
Thermal cycling: 4,200 hours
Normal operations: 70,000 hours

(2) Air Force Digital Avionics Information Systems (DAIS)(6)--A failure-rate goal of 10^-7 failures per hour has been set, corresponding to a reliability of .9999997 for a 3-hour mission, for a whole fleet of aircraft. This reliability apportionment is, of course, in the context of relatively short mission durations, with no in-flight manual repairs. In the first versions of DAIS, reliability enhancement through redundancy is needed only in the autopilot function.

(3) Commercial Avionics Requirements--NASA has issued target reliability requirements for automated motion control and landing of passenger transports; the reliability goal is, among others, motivated by insurability considerations. A reliability requirement for the aircraft motion and landing control system has been proposed, corresponding to a maximum allowable failure rate of 10^-10 failures per hour in the digital control system, or a reliability of .999999999 for a 10-hour flight. It appears that these demanding requirements can only be met by a fault-tolerant architecture approach (see Section 6 below).

3.1 A tentative estimate of naval control system reliability requirements

The control system reliability problem for the low-manning naval ship or submarine of the next two decades is, of course, quite different from the three examples above. The ship or submarine mission is repetitive, usually of much longer duration than the examples above, and some partial repair during the mission is allowable. However, the principle of deriving prorated equipment reliability requirements to meet an overall mission success probability over a finite period of time is applicable. This approach should be the central focus used to define new ship control system requirements and to initiate new development, rather than, for instance, fragmented efforts attempting to reduce computer memory requirements or to develop complicated algorithms. One can venture to suggest that a typical prorated reliability goal for ship or submarine control should be quantitatively similar to the Apollo one, but at a cost substantially lower than the one incurred by the lunar mission.

Considering the number of hours in a 4-week ship or submarine mission, and a high premium placed on mission success (maybe of the order of .999), the prorated apportioned hourly reliability of the ship or submarine control system could be of the order of .99999 or more.
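As a rough numerical illustration of this prorating principle, the sketch below assumes a constant-failure-rate (exponential) model; the function names are invented for illustration, and the 672-hour figure simply spells out a 4-week mission. It converts between a mission nonfailure probability over a finite period and the corresponding prorated hourly reliability, and cross-checks the avionics figures quoted earlier.

```python
# Illustrative sketch (assuming a constant failure rate): converting between a
# mission nonfailure probability over a finite period and a prorated hourly
# reliability.
import math

def hourly_reliability(mission_reliability: float, mission_hours: float) -> float:
    """Prorated hourly reliability r such that r**mission_hours == mission_reliability."""
    return mission_reliability ** (1.0 / mission_hours)

def mission_reliability(failure_rate_per_hour: float, mission_hours: float) -> float:
    """Nonfailure probability over the mission, R = exp(-lambda * T)."""
    return math.exp(-failure_rate_per_hour * mission_hours)

# Four-week ship or submarine mission with a .999 success premium
# (672 hours = 4 weeks x 168 hours):
print(hourly_reliability(0.999, 4 * 7 * 24))   # ~0.9999985, i.e. ".99999 or more"

# Cross-checks against the avionics examples quoted above:
print(mission_reliability(1e-7, 3))            # DAIS: ~0.9999997 for a 3-hour mission
print(mission_reliability(1e-10, 10))          # ~0.999999999 for a 10-hour flight
```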

Experience shows that a large number of nines in the stated system reliability goal stresses all parts of a system and calls for new advanced technologies, both in components and in the architecture that binds the components. The technologies envisioned for the automated ship or submarine would be quite different from the ones existing in the usual minicomputers (even in militarized versions) or the current-vintage standard naval computers.

4. CATEGORIZING SOURCES OF SHIP CONTROL UNRELIABILITY--SOME REMEDIES

To systematically address the ship control reliability problem, it is useful to identify the different kinds of failure sources to which ship control systems are vulnerable. Two kinds of failure sources seem to exist:

Random physical failures WITHIN the equipments.
Failures AT THE INTERFACES.

The first type of failure is self-explanatory. The second or "interface" type of failure can be either functional or physical. Physical interfaces often leading to system failures are the usual connectors, cables, buffer amplifiers, power supplies, data transmitters, displays and manual controls, and, ultimately, the human operator. Functional interfaces that may lead to a system failure (this should include failures to successfully complete a mission, such as near-surface hovering by a submarine without broaching) are "mismatches" present in the system from its very inception due to erroneous design assumptions. In the case of the near-surface submarine example, these could be: control surface torquing and lifting requirements; required data processor throughput; processor size and I/O capacity; signal-to-noise ratio assumptions for sensing and transmitting, etc. In some of these "interface type" failure cases, no physical equipment may really "fail"; the system is just designed wrong. The only way to prevent failures of this kind is to rely during design on truthful models of the process being controlled (e.g., accurate hydrodynamic and hydraulic models, etc.). This permits one to quantify safely the design margins; "padding" of designs is, of course, often the result of uncertainties in models (e.g., hydrodynamic modelling of a near-surface submarine cruising in agitated seaways), or results from a lack of understanding of the operating conditions (e.g., what will the computer load be under partial casualty conditions? Will the partial casualty domino into worse casualties due to insufficient throughput?). Exaggerations of design margin are often justified by "safety".

Finally, a special kind of "functional interface" failure is the system faults caused by software, since software can be conceived of as the glue that cements all the functional requirements of any computer-based control system.

Leaving aside the important question of reliable hydraulics and control servoactuation (not addressed in this paper), let us consider the remainder of the naval ship and submarine control system, and let us assume that it will heavily use digital technology for signal sensing, processing, transmission, display, storage, etc. In order to combat the occurrence of both kinds of system failures (i.e., random physical failures within the equipment, and interface-type failures), a focused R&D program aiming at upgrading ship control reliability should address the following triad:

(1) New hardware components enhancing reliability.

(2) Software reliability.

(3) New system architectures, both local and global, which enhance reliability.

R&D efforts in all three areas seem to be intimately intertwined: e.g., it would be a serious mistake to assume "pure software" approaches to reliability enhancement; the availability of certain new key components may dictate new architectures, etc. Let us briefly examine each area.

4.1 R&D in new components enhancing reliability

Component standardization was probably, in retrospect, the single most important step towards reaching high inherent reliability levels in the Apollo computer (exclusive use of TTL logic).(7) Today new components are gradually becoming available that would fit the entire Apollo computer capacity on a single microelectronic chip. It is believed that the key to future reliability of digital control systems is the repetitive use of identical basic components in generous allocations, and not necessarily the minimization of the number of such components in ad hoc architectures. It is expected that such devices (e.g., LSI microelectronics, high-speed non-volatile memories, etc.) will be gradually introduced and standardized; mass production is the most important factor for cost reduction; special quality control measures are needed for the military applications. It may take 5 to 6 years to reliably obtain the devices, and probably as long to produce them at low cost.

5. SOFTWARE RELIABILITY: A HIGHER ORDER SOFTWARE APPROACH

Software reliability may well be the Achilles heel of any ultra-high reliability control system. No known technique will absolutely guarantee software reliability,(8,2,3) but techniques are now known to greatly increase confidence levels, and of course there is a need to drastically reduce cost. Of some 2,000 man-years that went into the acquisition of Apollo flight software, more than 1,000 man-years went into software verification.(11) The prodigious difficulties encountered in software verification may have been the most important lesson learned in that development, a lesson now painfully diffusing into Naval development of weapon and ship systems. Software verifiability demands both hardware and software features that must be built into the early conceptual system designs, and requires special facilities.

5.1 Hardware features enhancing software reliability

The following hardware features have been found to reduce software costs, and to enhance software reliability.

Provision of generous computer resources (memory, speed, word length, instruction repertoire).
Simplicity of addressing structure.
Availability of microprogramming (firmware).
Availability of floating-point hardware.
Hardware fault-tolerance which is transparent to software (see below).
Eliminate or restrict interrupts.
Test-cooperative hardware: marker bits, branch protection, historical data storage, event counters.

5.2 Higher-order software

Higher-order software (HOS)(2,3,9) is a post-Apollo technique aiming at boosting software reliability by a special focus on interface correctness. The basic premise of HOS is an intelligent partitioning of software into modules, and a special emphasis on managing the data and timing interfaces between these software modules. The focus on interface correctness comes from a conviction that no amount of dynamic simulation of a given software package, on either its final computer(s) or on a host computer, will ever absolutely guarantee nonfailure for the myriads of possible combinations of external and internal events that can happen in a system. Furthermore, a careful analysis of Apollo software anomalies has shown that 73% of all recorded anomalies were due to software-to-software interfaces.(12) It is estimated that a corresponding fraction of the overall effort of software verification could have been saved if a safe method had existed then to preempt these interface problems. HOS now provides that method: a set of six axioms to resolve the inherent conflicts between the unavoidable top-down and bottom-up processes taking place during software design and acquisition.(13) The axioms legislate(2) invocations between modules (axiom 1), data access rights (axioms 3, 4), rejection of bad input and output data (axioms 2, 5), and the ordering of execution of sibling modules controlled by the same controller (axiom 6). The result of axiom 1 is the existence of a linear control tree, where each module is the controller of the "function-modules" immediately beneath it, or the function of the single "controller-module" immediately above it. A module thus can be a controller or a function depending on the point of view. Also, all modules of a Higher Order Software tree are control modules, except for the extremities of the tree, which are pure functions, i.e., they execute arithmetic and algebraic functions without decision logic. In contrast to conventional structured programming, of which it has all the advantages, HOS explicitly addresses the data and control integrity problems for real-time, single- or multicomputer systems.
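Purely as a loose illustration of the control-tree idea (this is a hypothetical sketch, not the HOS notation or axioms themselves, and the module names are invented), one can picture a tree of controller modules whose leaves are pure functions, together with a static check that every invocation runs only from a controller to one of its immediate function-modules, in the spirit of axiom 1:

```python
# Hypothetical sketch of a linear control tree in the spirit of HOS axiom 1
# (invocations only from a controller to its immediate function-modules).
# Module names are invented for illustration; this is not the HOS notation.

class Module:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)   # empty list => leaf, i.e., a pure function

def check_invocations(root, invocations):
    """Statically verify that each (caller, callee) pair is a parent-child edge."""
    parent_of = {}
    stack = [root]
    while stack:
        node = stack.pop()
        for child in node.children:
            parent_of[child.name] = node.name
            stack.append(child)
    violations = [(c, f) for (c, f) in invocations if parent_of.get(f) != c]
    return violations                    # empty list means no axiom-1-style violation

# Invented example: a depth-control controller over two pure functions.
trim = Module("compute_trim")
planes = Module("command_planes")
depth_ctl = Module("depth_control", [trim, planes])
ship_ctl = Module("ship_control", [depth_ctl])

print(check_invocations(ship_ctl, [("depth_control", "compute_trim"),
                                   ("ship_control", "command_planes")]))
# -> [('ship_control', 'command_planes')]: ship_control skipped a level of the tree.
```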

The consequences of an axiomatic approach to software interface management appear extremely attractive: all software interfaces can be verified statically without program execution, i.e., without complex and expensive simulations; the scanning of interfaces for axiom violations can be made manually, semiautomatically, or automatically; it can be done off-line, or in real time (structuring executive concept(3)). From the few axioms, many theorems can be derived, describing permissible and non-permissible data access and timing relationships between modules in a multiprogramming, multicomputing (federated), or multiprocessing environment.(10) Since pseudo-modules that exactly exercise the data and timing interfaces can be substituted for real application modules, a software breadboard approach is possible if HOS structuring techniques are used. A software breadboard approach can be specifically geared to allocating and managing the use of scarce system resources (core memory, CPU time, etc.), thereby reducing major development risk just as breadboards are used to reduce hardware development risk. Interface correctness can be achieved early at the requirements level, and a "safe" mechanism is in place for iterative redesigns. A real potential exists for automation, since only six axioms need to be checked for violation. Structuring following the HOS format can be automatically documented from source code (this technique is presently used by the Space Shuttle program(14)). The neglect of interface correctness problems at early or middle stages of software development, so often the conventional approach, is already alleviated now by the use of manual HOS techniques. A software specification language based on the HOS axioms is now in development,(15) which promises to give a proactive tool for enhancing reliability; the constructs of this specification language will naturally follow the HOS axioms; the use of this new tool will free the system designer to concentrate on performance verification, systems tradeoffs, and optimization, without the present risky and expensive manual burden of assuring interface correctness.

5.3 On-line restructuring

It is not enough for reliable system software to be error-free in the application and operating system portions. It must also be able to detect, isolate, and recover from errors in the computing hardware, in the transmission channels (data busses, data links), in the subsystems connected to the computer(s) (including a human operator), and from drastic stimulation from the environment (e.g., complete power outage). All these system requirements are related to software control issues which HOS makes explicit and specifically manages. An asynchronous approach to the executive program, which proved invaluable for Apollo (as the Apollo 11 moon landing incident showed), appears mandatory for shipboard use. A combination of HOS and an asynchronous executive permits handling temporary overloads by the ability to drop less important tasks and increase the execution rate of the critical ones. System reconfiguration for servicing changing real-time mission phases (using mass memory, for instance), or for casualty control, is safely and inexpensively afforded by HOS. An HOS-structured program automatically provides restartability and reentrancy for all modules, provides restarts from random complete power outages, and handles slow restarts and permanent partial failures of a system. Real-time axiom violation checking (structuring executives) is deemed feasible, and is presently being researched.( ) The case of multiprogramming, i.e., the use of commonly addressable memory by several CPUs, is potentially of great importance for shipboard use, and appears covered by HOS.
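The overload-handling idea can be illustrated with a hypothetical fragment (it is not the Apollo or HOS executive; the task names, priorities, and loads are invented): a priority-ordered task list from which an asynchronous executive sheds the least important jobs whenever the offered load exceeds the processor capacity.

```python
# Hypothetical sketch of priority-based load shedding in an asynchronous
# executive: when offered load exceeds capacity, low-priority tasks are
# dropped so that critical ones keep their execution rate.

def schedule(tasks, capacity):
    """tasks: list of (name, priority, load), lower priority number = more critical.
    Returns (kept, dropped) so that the total load of 'kept' fits within capacity."""
    kept, dropped, used = [], [], 0.0
    for name, priority, load in sorted(tasks, key=lambda t: t[1]):
        if used + load <= capacity:
            kept.append(name)
            used += load
        else:
            dropped.append(name)
    return kept, dropped

# Invented task mix (fractions of one CPU): depth control stays, trend logging goes.
tasks = [("depth_control", 0, 0.35), ("steering", 0, 0.30),
         ("alarm_scan", 1, 0.20), ("trend_logging", 2, 0.25)]
print(schedule(tasks, capacity=0.9))
# -> (['depth_control', 'steering', 'alarm_scan'], ['trend_logging'])
```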

5.4 Interchangeability of hardware and software

The HOS control tree does not distinguish between hardware and software implementation of its modules; neither does it demand that all modules be resident inside one computer. As shipboard system requirements may dictate, certain portions of the HOS tree may be implemented in microcode instead of software, which could save on CPU timing and erasable core. The HOS methodology is of course applicable to the verification of firmware. A fundamental overall system design approach of the Higher Order Software methodology is to draw a single, complete, although preliminary, control tree for the entire shipboard control problem at hand. The tree should encompass all the sources and sinks of information (sensors, displays, etc.) and all the control modules and their relationships. A partitioning of the control tree into subtrees is then made, based on system criteria such as the rate of traffic between modules and certain core memory blocks, etc. This approach preserves data and timing integrity, and can lead, for instance, to the implementation of a federated multicomputer system. Simple, low-overhead operating systems have been designed following the HOS methodology. A pseudomodule implementation of the HOS tree permits verifying interface correctness and scarce resource budgeting at the requirements level.

5.5 Software verification facilities

Software in a ship control system will probably undergo numerous changes during the early phases of system test, during early operational use (learning curve phenomenon), and during operation and maintenance. But every software change introduced in the system is another potential source of unreliability. With HOS there is no longer any need to go through agonizing software reverification after every change (the unavoidable predicament in the Apollo days). HOS, and its automated tools, make it reasonable to guarantee that even numerous and fundamental changes to isolated modules of an existing system originally designed under HOS rules will not decrease the reliability of the assembly because of integrity issues.

A statement-level simulator(16) is offered as a crucial part of any facility dedicated to the verification of software. The complex issue of computer language selection, and the ability to write software independent of the eventual computer that will use it, is a continuous struggle. Language standardization, in particular, is a prerequisite for reliability in the cases where large systems are assembled from portions procured through different contractors. The major issue, however, is not the use of a specific high-order language, but the fundamental structuring of software. An HOS approach would permit automatic verification of interfaces. A dedicated facility is needed from the point of view of overall functional integration of the ship control system; but software verification is an ADDITIONAL, AND DISTINCT, STEP requiring its own separate facility.

6. SYSTEMS ARCHITECTURE: A DISTRIBUTED, FAULT-TOLERANT HIERARCHICAL APPROACH

A primary concern for high reliability in ship control dictates special architectures, both in hardware and software. Currently used architectures for naval ship control systems are often the result of historical incremental additions to primary configurations that were conceived from previous technology constraints that gradually became obsolete (e.g., central computer complexes originated from bulky, expensive, second- and third-generation computers requiring special environmental conditioning).

6.1 Distributed or centralized digital control systems

An issue confronting Naval systems designers is whether the digital control system ought to be centralized or distributed. This question has to be resolved on the basis of each application, which may have quite different requirements. For example, the Apollo centralized computer approach was the result of a historical development where computer capacity requirements grew from modest initial sizes to an eventual 38,000-word size, without the availability of swapping in and out of bulk memory, which had been eliminated mostly for reliability reasons; the decision to centralize the Apollo computer was based on early 1960s technology. Similarly, a recommendation for post-Apollo Space Shuttle avionics systems made in 1969 strongly endorsed a distributed approach,(17) although the concept finally adopted by NASA was a central computer facility made of bit-by-bit redundant units. Problems of throughput underestimation are now being addressed.

The sheer size of a Naval ship or submarine, where ship control elements are naturally distributed along the length of the vehicle, would militate against a central control computer approach. Ironically, severe software problems are caused in central computer complexes because the operating system attempts to make it look as if the central computer is actually solely dedicated to each task being multiprogrammed. Data and timing conflicts flourish in a hostile software environment. Testing, verification, maintenance, and casualty control are difficult. Higher Order Software can help solve the integrity issues, but cannot much reduce the overhead problem. Furthermore, the one central processor acts as a system reliability bottleneck, which is usually alleviated by expensive duplication of whole mainframe equipment.

In contrast, the distributed approach permits tailoring the digital control system to naturally existing shipboard partitioning and local conditions (i.e., location of sensors, hull penetrations, control stations, backup stations, individual propulsion units, auxiliaries, power plants, CIC room, bridge, etc.). Even in a very small submarine (the Deep Submergence Rescue Vehicle, DSRV), reliability considerations called for partitioning the control functions between separate, dedicated computers. (The DSRV has three separate computers: one for navigation, one for autopilot functions, and a central supervisory computer.)(18,19) A study(20) comparing the relative advantages of a central computer complex versus the use of local computer(s) dedicated to the ship control function of an attack submarine indicates that, at equal costs, the latter approach is far superior, mostly from the point of view of software reliability.

6.2 Hierarchical control systems

In addition to having a natural partitioning of ship control elements, Naval ships and submarines also exhibit natural hierarchies, i.e., there exists a command structure allocating the control authority for the activation and actuation of each ship control element, either prime or backup, whether human operators are involved or not. A distributed digital control system permits tailoring to these naturally existing or desired partitionings and hierarchies. This implies numerous levels and sites of autonomy, often with different requirements in processing, bandwidth, environments, size, and reliability. Each autonomous site (or "node") must have some capability of processing information and/or exercising control over some part of the overall system; each site will be capable in principle of communicating with a "superior" site, and with one or several "parallel" or "inferior" site(s). Such a control system approach is called hierarchical.(21) The parallel with Higher Order Software trees (see above) is striking.

At the bottom of the ship control hierarchy of the future automated submarine or ship, one expects to find local processors operating at relatively high data rates, with a fast response time, and dedicated to a single major sensor, effector, or display. Examples could be:

Very high throughput signal processors operating on raw sonar data.
A string of bit-by-bit redundant microprocessors servicing a pressure-depth gauge near the hull penetration.
A string of microprocessors monitoring and controlling hydraulic plant behavior; monitoring of pressures, temperatures, etc. will provide trend analysis.
A unit refreshing a CRT and formulating displays.

At an intermediate level, another small computer might monitor these trends in an Engineering Operating Station (EOS) and compare them with alarm levels of arbitrary severity, with on-line switching commands; the alarm thresholds could be set by the central computer at the top of the hierarchy. The central processor of a hierarchical system needs to be very reliable and is expected to work at low data rates and to respond relatively slowly, but to perform rather sophisticated calculations (for instance, optimal control commands, or display synthesis for the human operator). The central computer might take on the chore of flagging incipient failures (on the basis of trend analysis conducted lower in the hierarchy), or spew out orders for nonscheduled maintenance of accessible parts of the system. One can conceive that every major, and even not-so-major, shipboard piece of equipment would be "wired" to a controller which will probe the equipment, report on its health, possibly automatically substitute a standby in case of failure, call for maintenance, etc.
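Purely as an illustration of this hierarchy (node names and threshold values are invented, not taken from the paper), the sketch below shows an intermediate node that applies an alarm threshold handed down from its superior and escalates only alarms up the chain:

```python
# Hypothetical sketch of hierarchical ship-control nodes: an intermediate node
# applies a threshold set by its superior, and only alarm reports travel up
# the hierarchy. Names and values are invented for illustration.

class Node:
    def __init__(self, name, superior=None):
        self.name = name
        self.superior = superior
        self.alarm_threshold = None      # set by the superior node

    def set_threshold(self, value):
        self.alarm_threshold = value

    def report(self, measurement):
        """Escalate to the superior only when the local threshold is exceeded."""
        if self.alarm_threshold is not None and measurement > self.alarm_threshold:
            return f"{self.name} -> {self.superior.name}: alarm, value={measurement}"
        return f"{self.name}: nominal, handled locally"

central = Node("central_computer")
eos = Node("engineering_station", superior=central)
eos.set_threshold(95.0)                  # e.g., an invented hydraulic temperature limit

print(eos.report(78.0))   # -> "engineering_station: nominal, handled locally"
print(eos.report(101.5))  # -> "engineering_station -> central_computer: alarm, value=101.5"
```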

The objective in designing the hierarchical control system is again to attain extreme reliability, and this implies node standardization. The usual growth and local changes to the control system performed during the life of the system will be implemented WITHOUT LOSS OF RELIABILITY, by either the classical addition of local capacity to an existing node, if it can be done, or by "offloading" the node through creation of another node inferior to it. (Different nodes need not necessarily be separated by large physical distances.) Node standardization will accrue substantial reliability advantages, including software reliability, since a format, a language, and a structuring approach applicable to all nodes will be used. Software verification is simplified by the hardware partitioning, and, of course, Higher Order Software, which should be used, is fundamentally based on the premise of hierarchy.

6.3 Fault-tolerant communication between ship control elements

A possible reliability drawback of a distributed hierarchical data management system for ship control is the need for data transfers between the nodes. This problem falls under the general need for reliable shipboard information transfer systems, an area already addressed by current Naval R&D in multiplexing. There is a need for a transfer system that survives local link failures, or even failures or destruction of the ship control nodes being linked through the net. Of at least four available design alternatives(22,23) (i.e., (1) dedicated connections, (2) data bussing, (3) passive or lossy networks, and (4) active data networks), the active network approach is recommended. The ability to reconfigure ship control to at least a degraded mode should be provided by a sufficiently rich topology of the active network. The "active nodes" provide damage and fault tolerance. The elements of ship control are tied with efficient interconnections at low power. The use of electro-optical components is not incompatible with the concept. Self-diagnosis and self-repair have been demonstrated. Strategies of network management involve a small amount of hardware at each node, and software resident in the control processor managing the net. Additional advantages perceived to exist include all the good features of classical data transfer by multiplexing, a lesser vulnerability to common-mode failures, and adequate signal-to-noise behavior. The hierarchical design of the local processors would prevent sending signals of unnecessarily high bandwidth through the active network. Local processors thus act as bandwidth transformers. The hierarchy of the network is software controlled. Casualties from failure or damage at any part of the hierarchy could be handled by reconfiguring the hierarchy; any node with sufficient computing power available has the potential of assuming network control, thereby preserving ship control. A Higher Order Software approach appears to be essential for structuring the control software.
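To illustrate why a sufficiently rich topology matters, the toy sketch below (the nodes and links are invented, not a design from the paper) removes one failed link from a small mesh and checks whether every ship control element can still be reached from the node currently managing the net:

```python
# Toy illustration of active-network survivability: with a richer-than-minimal
# topology, losing one link still leaves every node reachable from the network
# manager. The nodes and links are invented for the example.

from collections import deque

def reachable(links, start):
    """Breadth-first search over an undirected link list; returns the reachable node set."""
    adj = {}
    for a, b in links:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

nodes = {"bridge", "eos", "bow_planes", "stern_planes", "ballast"}
links = [("bridge", "eos"), ("eos", "bow_planes"), ("eos", "stern_planes"),
         ("bow_planes", "ballast"), ("stern_planes", "ballast"), ("bridge", "ballast")]

failed = ("eos", "stern_planes")                  # simulated damage to one link
surviving = [l for l in links if l != failed]
print(nodes - reachable(surviving, "bridge"))     # -> set(): all nodes still reachable
```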

6.4 Fault-tolerant computers

One cost-effective approach to meeting ultra-high ship control reliability goals, either for local or central digital processors, would be the use of fault-tolerant computers when they become available. A fault-tolerant computer permits internal random component failures to occur within itself without loss of computational continuity;(24,25,26) self-repair occurs within certain limits thanks to internal redundancy. Fault-tolerant computation research is now moving away from the stage of laboratory curiosity, but the use of redundancy per se is full of pitfalls. Vital systems questions are: How to detect a failure? How to switch a spare? How to test the integrity of standbys? Whether to operate synchronously or not? How to manage the internal data busses? etc. A recommended concept of a fault-tolerant computer uses the following:(26,27)

Separate replication of elements (CPUs, memories, I/Os, busses, power supplies).
Bit-by-bit comparisons of strings of such elements, with majority voting.
Unassigned standby spares, switched on by software.
Fault-tolerant clocking.
Complete transparency to the application programmer (i.e., the programmer does not have to be aware that the computer he is using is internally distributed, or that bit-by-bit comparisons are going on).

The strings of devices (CPUs, memories, etc.) comprise N > 3 elements, N being defined by the reliability goal to be achieved in the hierarchical system.(6,28) This concept of fault tolerance has evolved over the years since 1966.
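A minimal sketch of the bit-by-bit majority-voting idea is given below (illustrative only: N = 3 here, and the word values and fault pattern are invented); three replicated units produce an output word, the voter takes the bitwise majority, and a single faulty unit is outvoted:

```python
# Minimal sketch of bit-by-bit majority voting across three replicated units.
# A single faulty unit is masked because, for every bit, at least two of the
# three copies agree. Values are invented 16-bit words for illustration.

def majority_vote(a: int, b: int, c: int) -> int:
    """Bitwise majority of three equal-width words: each output bit is set
    if at least two of the corresponding input bits are set."""
    return (a & b) | (a & c) | (b & c)

good = 0b1010_1100_0011_0101          # word produced by the two healthy units
bad  = good ^ 0b0000_0100_0000_0001   # third unit suffers two flipped bits

voted = majority_vote(good, good, bad)
assert voted == good                  # the fault is outvoted; computation continues
print(f"{voted:016b}")
```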

The software implications of a fault-tolerant multiprocessor computer constitute an important area of R&D in itself,(29,30) and preliminary results exist.(31) The methodology of HOS appears applicable to both the operating system and the transparent application software resident in a fault-tolerant computer.

6.5 Capacity allocation and reliability

It is felt that the kind of fault-tolerant philosophy that must be pursued should be based on the premise of generous hardware allocations permitting the use of many identical devices; this will permit maximum benefits from the learning curve phenomenon, an absolute prerequisite for achieving reliability. This is in contrast to philosophies that attempt to minimize the number of devices, and which lead to awkward architectures, more difficult standardization, and esoteric components and subassemblies. This is a crucial issue, since a miser's attitude in early hardware capacity allocations (memory, I/Os, etc.) can be identified as the major factor in the cost escalation of numerous DOD systems.(32) Insufficient initial hardware allocation has led to extreme packing and connectivity, which caused very high software costs and sometimes the need for later hardware additions. (Software costs are now, of course, higher than computer hardware costs.) The overall cost escalation, unfortunately, is not necessarily, and usually is not, accompanied by better reliability, since hardware additions create more interfaces to manage, and very dense software coding is not easily amenable to convenient verification or reverification.

The advent of "fourth generation" hardware should completely relax the need for "saving" on computer devices, but only if reasonable architectures are adopted; both are needed to allow high reliability at low overall life-cycle cost.

7. CONCLUSIONS AND RECOMMENDATIONS

Ship control systems of the future will heavily depend on digital technology for ultimately attaining very high reliability. It is believed that the concepts covered above will help meet, at lowest cost, the complexity and high reliability requirements of future Naval ships, both in the control of classical areas such as propulsion, prime movers, auxiliaries, ship maneuvering, active seakeeping control by autopilots, etc., and of extremely sophisticated new mission payloads. These future requirements naturally lead to a distributed, hierarchical network of standard high-reliability nodes exhibiting local fault tolerance, tied together by communications paths of equally high reliability also exhibiting damage-and-fault tolerance. For new applications the concept should be planned from the outset of ship design, but a good potential for retrofits exists thanks to the flexibility of the distributed-hierarchical approach and of presently existing Higher Order Software tools.

It is believed that the present widespread use by the U.S. Navy of Central Computer Complexes is only a transitory stage, and that Naval ship control and weapon systems of the future will attain higher reliability at lower cost by using the distributed/hierarchical control approach. Fault tolerance will be mandatory, but it cannot be implemented as an afterthought by fitting existing pieces together.

Software is believed to remain a grave problem. Proliferation of proprietary approaches will not help. Higher Order Software and its potential, and the current Navy plans to standardize languages and methodologies, are steps in the right direction. Ultimate software verification requires early attention in design, and special facilities. It is not yet feasible to factor quantitatively the software contribution to overall ship control system unreliability.

Finally, the decision to force the use of standard AN/UYK-7 and AN/UYK-20 computers for Naval tactical applications is believed to have been a very useful stopgap measure; however, indefinite exclusive commitments in the future to these computers in inventory might prevent one from satisfactorily solving the long-term high reliability problem, since these two computers cannot be all things to all users, and the fault-tolerance problem requires special planning.

The needed focused R&D must attack simultaneously three intimately intertwined areas: components and control devices, software, and new architectures, three areas fraught with uncertainties. Some technology transfer from commercial advances will help meet the military objective, but probably only in the hardware component areas (i.e., microelectronics, fiber optic links, etc.); the software and architectural issues are very specific to military needs.

The need for very reliable control systems as a prerequisite to ship and submarine automation is a foregone conclusion. Although the automation need is already almost universal for the current new designs of the U.S. surface fleet (1975) and is felt by the new submarine acquisitions, the need will be even more exacerbated for the new designs of U.S. Naval vessels and retrofits of the 1980s and 1990s. Ship design and acquisition managers of future years must be given certified, low-risk components and proven systems concepts "on the shelf" to meet the tough demands of their new vessels. The time horizon up to engineering development implied by the concepts described here is at least 10 years.

REFERENCES

Some of the arguments on the advantages of digital technology have been summarized in "A Digital Autopilot for a Hydrofoil Craft"; E. A. Nordstrom, F. S. Gamber, P. G. Dogan; C. S. Draper Laboratory Report R-722; June 1972.

Fraser, D., Felleman, P., "Digital Fly-By-Wire: Computers Lead the Way"; Astronautics and Aeronautics, AIAA; July/August 1974.

Bouricius, W., Carter, W. C., Schneider, P. R., "Reliability Modeling Techniques for a Self-Repairing Computer System"; Proceedings of the 24th National Conference of the Association for Computing Machinery.

Hamilton, M., Zeldin, S., "Principles on Higher Order Software Illustrated by Application to a Space Shuttle Prototype Program", C. S. Draper Laboratory Report R-790, February 1974.

Hamilton, M., Zeldin, S., "Higher-Order Software: Methodology for Defining Software", C. S. Draper Laboratory Report R-862, March 1975.

Yin, Allen, "Digital Avionics Information Systems (DAIS), Final Report, Flight Control System Reliability Task"; C. S. Draper Laboratory Report R-816, September 1974.

Hall, E., "MIT's Role in Project Apollo", Final report of contracts NAS 9-135 and NAS 9-4065, Volume III, Computer Subsystems, C. S. Draper Laboratory Report R-700, August 1972.

SIGPLAN Notices, Vol. 10, June 1975; Proceedings of the International Conference on Reliable Software, 21-23 April 1975, Los Angeles, Calif.

McCoy, B., "DAIS Avionic Software Development Techniques," C. S. Draper Laboratory AIAA Paper.

Boetje, G., "Managing Software Development: A New Approach," C. S. Draper Laboratory.

Hamilton, M., "Management of Apollo Programming and its Application to the Shuttle," C. S. Draper Laboratory Software Dhuttle Memo

No. 29,

May 1971.

Hamilton, M., "Design of the Guidance, Navigation and Control Flight Software Specrification," C. S. Draper Laboratory Report C-3899, February 1973.

Hamilton, M., Zeldin, S., "Top-down, Bottom-up

Structured Programming and Program Structuring," Rev. 1, C. S. Draper Laboratory Report E-2728, December 1972.

Daley, W., "Automatic flowcharts", C. S. Draper Laboratory, Mercury Memo No. 53, March 1974.

System Specification and Design, Preliminary

Report on System Specifica-tion and Cataloguing, 8 August 1975 (Tentative

unpunished) Naval

Electronics Laboratory Center, San Diego, Technical Note TN-3031, NELC 0229.

Boucher, R., et al., "Users Guide to the C. S. Draper Laboratory Statement Level Simulator," C. S. Draper Laboratory Report R-799, July 1975 (Rev. 1).

"STS Data Management System Design, Task 2"; C. S. Draper Laboratory Report E-2529, June 1972 (several authors).

Decanio, F., Dogan, P., "Analysis and Design of the DSRV Ship Control System", C. S. Draper Laboratory Report R-710, April 1972.

Dogan, P., et al., "Simulation of the Deep Submergence Rescue Vehicle", C. S. Draper Laboratory Report R-671, June 1972; see Chapter 2.

Lawson, R., Dogan, P., Bojarski, S., "A Ship Control Computer Trade-off Study for the Cruise Missile Submarine", C. S. Draper Laboratory Report R-743, May 1973.

Hopkins, A. L., "Hierarchical Autonomy in Spaceborne Information Processing", C. S. Draper Laboratory Report P-150, Cambridge, Mass., February 1975; presented at the IFAC/75 Sixth Triennial World Congress, Boston/Cambridge, Massachusetts, August 24-30, 1975.

Smith, T. B., "Damage Control Mechanisms in Digital Communications Network for Distributed Real-Time Control Systems", presented at the 1975 IEEE International Convention & Exposition, April 8-10, 1975, New York; 1975 IEEE Intercon Conference Record, Session 11, p. 1.

Smith, T. B., "A Damage-and-Fault-Tolerant Input-Output Network", in Dig. 4th International Symp. on Fault-Tolerant Computing, IEEE Computer Society, June 1974.

Hecht, H., Editor, "A Fault-Tolerant Multiprocessor", Collected Papers on Fault-Tolerant Spacecraft Computer Technology, The Aerospace Corp., Aerospace Report #TR-0172(2315)-2, Los Angeles, Calif., March 1972, pp. 183-210.

"A Fault-Tolerant Information Processing System for Advanced Control, Guidance, and Navigation", C. S. Draper Laboratory Report R-659, May 1970.

Smith, T.B., "A Highly Modular Fault-Tolerant Computer System", Ph.D dissertation, Aeronautics & Astronautics Dept., MIT, Cambridge, Mass, November 1973.

Hopkins, A.L., Smith, T.B., "The Architectural Elements of a Symmetric Fault-Tolerant Multiprocessor", in Dig., IEEE 4th International Symp. on Fault-Tolerant Computing, Univ. of Illinois, Urbana, June 1974.

Lala, J. H., "A Cost and Reliability Model of Partitioned Digital Systems", C. S. Draper Laboratory Report R-573, Cambridge, Mass., February 1973.

Rosenburg, S. C., "An Executive Program for an Aerospace Multiprocessor", C. S. Draper Laboratory Report T-552, Cambridge, Mass., September 1971.

Weinstein, W. W., "Correlated Malfunctions in Redundant Systems", C. S. Draper Laboratory Report T-581, Cambridge, Mass., September 1972.

Weinstein, W. W., "Software Supplemented Error Detection and Recovery Technique for an Avionics Control System", C. S. Draper Laboratory Report R-781, December 1973.

Boehm, B., "Some Information Processing Implications of Air Force Space Missions: 1970-1980", Memo RM-6213-PR, Rand Corp., January 1970.

CURRICULUM VITAE

J. VAN AMERONGEN

He was born in Veenendaal, The Netherlands, in 1946. In 1971 he graduated in Electrical Engineering at Delft University of Technology, Delft, The Netherlands. During his military service in the Royal Netherlands Navy he worked on mathematical modelling of ships and on the development of an adaptive autopilot. At present he works on the staff of the control laboratory of the Electrical Engineering Department, Delft University of Technology. His current interests are ship control systems and electric power systems.

A.D. APPLETON

Born in Surrey in 1928, educated at grammar schools in Manchester and Essex and at Queen Mary College, University of London, gaining a B.Sc. (Hons) degree in electrical engineering. Awarded a State Scholarship. With the General Electric Co. Ltd. from 1953 to 1964 on power equipment and nuclear reactors; attached to AERE Harwell for 5 years on control of heavy water reactors and the engineering associated with thermonuclear research. Joined IRD in 1964 and became responsible for the superconducting machine project in 1965; made Head of the Electrical Engineering Department in 1968. Present responsibilities include superconducting a.c. generators, superconducting d.c. motors and generators, superconducting magnets, current collection, special purpose motors, e.g. for propulsion of small submersibles, fusion activities, MHD, and a wide range of sponsored research topics. Serves on a number of committees for the Ministry of Defence, the Science Research Council, and the British Cryogenics Council (Vice Chairman).

E.G. ARNOLD

Training: Mechanical engineering apprenticeship with Imperial Chemical Industries; cadet student training with the British Ministry of Defence (Navy).

Qualification: B.Sc. (Mechanical Engineering), first class honours.

Professional experience: He is working at the Ministry of Defence (Procurement Executive), Ship Department, and is responsible for the interpretation of naval requirements into main propulsion and auxiliary machinery designs for future warships and the examination of alternative MCY systems for ships contained in the ship programme. Other responsibilities include the examination of the effects of energy shortages on warship design, and the monitoring of warship designs of foreign navies.

W.J. ARRIENS

W.J. Arriens, born in 1947, entered the Royal Netherlands Naval College (branch: supply) after completing high school in 1967. He obtained his officer's diploma in 1970. At present he is the secretary of the Flag Officer of the Royal Netherlands Naval College.


G. ASTORQUIZA VIVAR

Gustavo V. Astorquiza was born in Concepción, Chile, on June 20, 1944. He graduated from the Chilean Naval Academy in 1963. In 1968 he specialised in Gunnery and Fire Control in the Ordnance School of the Chilean Navy. He received the BS and MS in Electrical Engineering from the Naval Postgraduate School, Monterey, Ca., USA, in 1973 and 1975, respectively. He did most of his research in Ship Control.

At present Mr. Astorquiza is working at the Engineering Department of the Bureau of Weapons of the Chilean Navy and acting as a Faculty Member of Universidad Tecnica Federico Santa Maria, Valparaiso, Chile. Mr. Astorquiza has been an IEEE Member since 1973.

T.C. BARTRAM

Born in Yorkshire in 1946, Mr. Bartram is a graduate of Bradford University, gaining the degree of B.Tech. Hons in electrical and electronic engineering. Mr. Bartram was a student apprentice with the Yorkshire Electricity Board before doing postgraduate research in power system stability. On joining IRD in 1969 as a design engineer in the Electrical Engineering Department, Mr. Bartram worked on the control and electrical support systems for the prototype superconducting ship propulsion machinery. Mr. Bartram rose to the appointment of Group Leader in 1972, and is now responsible for electrical design, control and instrumentation in connection with the superconducting d.c. machines and systems, in addition to other topics not related to superconductivity.

A.D. BEARY, Jr.

He is working in the Automation and Control Division of the David W. Taylor Naval Ship Research and Development Center, Annapolis.

H.A.R. BEESON

Joined the Royal Navy in 1953 as an Artificer Apprentice. Served as Engine Room Artificer in HM Ships ADAMANT, BLACKWOOD, SOLEBAY, PUMA, GREATFORD and EASTBOURNE.

Promoted Chief Engine Room Artificer in 1968, subsequently serving in the Ship Maintenance Authority and HMS GLAMORGAN, from which he was promoted to commissioned rank in November 1970.

Served in HMS FEARLESS until joining HMS SULTAN in May 1973. Until recently, he was Pre-Joining Training Course Officer for the AMAZON/SHEFFIELD Classes. Currently preparing Pre-Joining Training


F.J. VAN DEN BERG

The author was born on 21 July 1946 at Nijkerk, Netherlands. In 1963 he started at the Technological University of Delft to study mechanical engineering. During the course he specialized, in 1966, in measurement and control systems. In June 1970 he received the ir. degree, equivalent to a Master of Science.

After that he spent 2 years in military service as a project coordinator in military development work.

In April 1972 he became employed at "Koninklijke Maatschappij De Schelde", a member of the Rhine-Schelde-Verolme Group, as an application engineer dealing with ship propulsion systems, especially in the measurement and control field.

W.B. VAN BERLEKOM

Education : Graduated from the Royal Institute of Technology (Stockholm), Department of Naval Architecture, in 1959.

Employments : From 1959 to 1963 at a consulting engineering firm working on hydrodynamic problems in connection with a submarine project. One year (1964) at the Research Institute of National Defence (Stockholm) working on hydrodynamic problems. From 1965 employed at the Swedish State Shipbuilding Experimental Tank (SSPA) at Gothenburg and engaged in research and development work in ship hydrodynamics, such as boundary layer problems, hydroacoustics (boundary layer noise, cavitation noise), manoeuvring and control of ships, etc.

R.K. BERNOTAT

Professor Bernotat is director of the Research Institute for Anthropotechnic (FAT), which is currently the largest research institution in the field of Human Engineering in the Federal Republic of Germany. He received his diploma in electronics in 1959 from the Technical University of West Berlin, where he subsequently held a position as research assistant at the Institute for "Aircraft Guidance and Control". After receiving his Dr. eng. in 1963 he became a member of the teaching staff, giving lectures in flight instrumentation and Human Engineering (anthropotechnic). Since 1967 he has been responsible for the build-up of the research institute (a non-profit organization, financed by the German government) and is now teaching Human Engineering at the Technical University of Aachen.

He is a member of the German Society for Aeronautics and Astronautics, was for six years chairman of the working group for Anthropotechnic in this society, and is a member of the German Society for Ergonomics and of the


R.E.D. BISHOP

Professor R.E.D. Bishop became a student at University College London after service in the Royal Naval Volunteer Reserve during the war. After periods in industry, in the U.S.A. and in Cambridge University, he returned to University College as the Kennedy Professor of Mechanical Engineering in 1957. He has remained there ever since.

W.J. BLUMBERG

WALTER J. BLUMBERG received his BSEE degree in 1950 and his MSEE degree in 1955, both from the University of Maryland, and has taken additional graduate courses at the American University, plus special courses in computers, control theory, and engineering management. He is currently Staff Assistant to the Automation and Control Division of the David W. Taylor Naval Ship Research and Development Center, Annapolis Laboratory. Since 1964, he has held research and development positions at the Annapolis Laboratory as Senior Project Engineer, Program Manager, and most recently, Head of the Control and Simulation Branch. During this period, he continuously promoted programs in surface ship advanced bridge control systems and displays, ship control systems automation and computer applications, and the use of analog, digital, and hybrid real-time computer simulation as a research tool.

From 1950 to 1964, Mr. Blumberg worked in private industry in various responsible research and development positions, beginning as a Computer Designer, at ACF Electronics, Melpar, and American Electronic Laboratory. He designed aircraft operational flight simulators, motion and visual effects simulation, weapon systems and electronic test equipment; did research on aircraft control systems; and managed major simulator and reconnaissance programs from design to deliverable equipment. He has made presentations to technical societies, and written numerous reports and papers on simulation and control. He is a member of the Society of Naval Architects and Marine Engineers (SNAME), the American Society of Naval Engineers (ASNE), the International Association for Analog Computation (AICA), the Association for Computing Machinery (ACM), the Society for Computer Simulation (SCS), and the Institute of Electrical and Electronics Engineers (IEEE) and its Societies. He was chairman of the First and Second Ship Control Systems Symposia, and U.S. Coordinator of the Third and Fourth Ship Control Systems Symposia. He has been a member of numerous government and government/industry committees and workshops on ship machinery and controls. He is a member of the Phi Eta Sigma Honorary Fraternity.

R.G. BOITEN

R.G. Boiten graduated from Delft University of Technology in 1945 and obtained his ir.-diploma.

Until 1959 he was employed by the Institute T.N.O., first as a research engineer and finally as Director of the Institute T.N.O. Mechanical Constructions.

Since 1959 he has been full professor in Control Engineering at Delft University of Technology.


T.B. BOOTH

Scholar and graduate of Trinity College, Cambridge.

Postgraduate studies at the College of Aeronautics, Cranfield (now the Cranfield Institute of Technology).

Industrial experience at Bristol Aircraft Limited and Rolls Royce Limited.

Service in the Royal Navy.

Joined the Royal Navy Scientific Service (now part of the Ministry of Defence) in 1956, serving in the Admiralty Gunnery Establishment, the Admiralty Surface Weapons Establishment and the Admiralty Experiment Works.

At present carrying out the duties of Chief Scientist, Admiralty Experiment Works.

C.J. BOYD

Captain Carl J. Boyd was commissioned as ensign on June 9, 1945, after completing Midshipman School at the University of Notre Dame.

He was ordered to the destroyer USS JOHN W. WEEKS, and remained in destroyer assignments until 1950. At this time, he enrolled in the U.S. Navy Postgraduate School, Monterey, California. He graduated in June 1953 with a Master's Degree in Engineering Electronics.

After tours on an aircraft carrier and the guided missile cruiser USS BOSTON, Captain Boyd returned to destroyer duty as Executive Officer of the USS HANSON. Following a tour as Executive Officer of the guided missile frigate USS COONTZ, he reported to the U.S. Naval War College at Newport, Rhode Island. He subsequently served with the Development Program Division of the Office of the Chief of Naval Operations and commanded the guided missile destroyer USS WADDELL. In February 1966, Captain Boyd reported as Commander, Destroyer Division Twelve. He served in the Naval Ordnance Systems Command, Washington, D.C., as the Project Manager of the Mk-46 torpedo prior to assuming command of USS SPRINGFIELD.

Captain Boyd assumed his present duties as Project Manager of the Surface Effect Ship Project in December 1971.

F. BOUTHELIER


J. BRINK

The author was born on 6 December 1938 at Valkenswaard, Netherlands. He became a Marine Engineering Officer in September 1960 after a three-year course at the Royal Naval Academy at Den Helder.

The next six years were spent at sea in Her Majesty's destroyers, followed by three years' service at the Royal Naval Academy. In 1969 he started at the Technological University of Delft, from which he received a degree equivalent to Master of Science in February 1973. After one and a half years at sea, he is now attached to the office of the Director of Naval Mechanical Engineering at the MOD (N) in The Hague, as an assistant project engineer dealing with automation and controls.

J.C. BRINKMAN

Mr. Brinkman graduated in 1967 from Delft University of Technology in electrical engineering.

Currently he is a lecturer in electronics at the Royal Netherlands Naval College.

G.D. BUELL

B.S., Aeronautical Engineering, Texas A&M University, 1956

M.S., Mechanical Engineering, University of Southern California, 1965.

Ph.D., Electrical Engineering, University of California at Los Angeles, 1969.

20 years of experience in Aerospace and Marine Systems engineering, including 5 years as a pilot in the USAF.

Presently Manager of the Washington Engineering Operations Department, Marine Systems Division, Rockwell International.

W.H.P. CANNER

Professional Qualifications
C.Eng. Chartered Engineer.
M.I.E.R.E. Member of the Institute of Electronic and Radio Engineers.
M.R.I.N. Member of the Royal Institute of Navigation.
Fl. Nav. Qualified Flight Navigator.

Present Position
Lectures on Ship Control Systems in the Department of Maritime Studies at the University of Wales Institute of Science and Technology.

Career to Date
1944 - 1956 Trained as an electrical engineer in the R.A.F., volunteered for flying training, and served as a flight navigator with Bomber Command.
1956 - 1960 On retiring from the R.A.F., moved into civil aviation as a lecturer with British Airways teaching aircraft control systems.
1960 - 1961 Spent a period in industry on auto-land devices for aircraft with Smiths Instruments Ltd.
1961 - 1975 Spent the last fourteen years as a lecturer in electronics with a particular application on ship control systems in the Department of Maritime Studies at the University of Wales.


H.J.S. CANHAM

Mr. Canham is working at the Admiralty Experiment Works, Haslar, U.K.

J. CARLEY

Dr. J.B. Carley joined the Ministry of Defence (Army Dept) as a student apprentice and gained an Honours degree in mechanical engineering at the City University, London. He was awarded an MOD scholarship to read for a higher degree at the University of Birmingham in Fluidics, for which he received his Ph.D. He joined the Admiralty Engineering Laboratory in 1970 as a Senior Scientific Officer to work on simulation and control of ship machinery systems. He is now a Principal Scientific Officer and currently engaged in the application of Identification Techniques and Control System Design to ship motion control and ship propulsion and auxiliary machinery systems.

P. CHADWICK

Function : Responsible for the control and co-ordination of trials with particular reference to the system performance aspect.

Educated at Bolton School, Lancashire. Joined de Havilland Propellers as an Engineering Apprentice in 1959. Employed in the performance section of Hawker Siddeley Dynamics working on preliminary design studies of a wide variety of control systems including marine steam boilers, gas turbines for aircraft and automatic gear boxes for heavy road vehicles. Transferred to the Marine Controls Department of Hawker Siddeley Dynamics Engineering and was involved with the development of the marine propulsion control system. Appointed leader of the Trials and Performance section with direct responsibility for Sea Trials.

Obtained an Honours Degree in Mathematics at Hatfield Polytechnic.

A. CHAIKIN

He is a Research and Development Program Manager at the Naval Sea Systems Command, Washington.

J.P. CLELAND

Born 2nd November 1942. BSc (Electronics) 1967; PhD (Multivariable Control Systems) 1971, both at the University of Strathclyde. Student/Graduate Apprentice with the British Steel Corporation 1960-1967. Joined Y-ARD in November 1970. Now employed as Consultant, Simulation and Systems Analysis Studies.


J.E. COOLING

J.E. Cooling was born in Dublin, Eire, in 1942. He joined the Royal Air Force from school and served both in the U.K. and the Middle East. The final two years of his service were spent as an electrical instructor at an R.A.F. School of Technical Training. From this he entered Loughborough University of Technology and subsequently graduated with a Bachelor of Science degree (honours) in electrical and electronic engineering. He then worked for the British Aircraft Corporation on the electrical design of flight control systems. He is currently employed in the control systems department of Marconi Radar Systems Limited (Leicester), specialising in the design and development of naval electronic control systems.

R.J.L. CORSER

LT CDR CORSER did his engineering training at the Royal Naval Engineering College, Plymouth, gaining an external London University degree in Mechanical Engineering, and on the Advanced Course at the Royal Naval College, Greenwich. He saw service in Her Majesty's ships NUBIAN, EAGLE and MINERVA before joining the ship's staff standing by HMS AMAZON whilst building from 1972-73 as the Marine Engineering Trials Officer. He then undertook the first-of-class evaluation trials during the first year's service of HMS AMAZON, up to February 1975. He is now a member of the Section responsible for machinery control and surveillance systems within the Ship Department of the MOD Procurement Executive, Bath.

G.B. COVENTRY

Educated at Glasgow Academy and the University of Strathclyde, graduating B.Sc. Honours in Electrical Engineering in 1967.

After graduating, joined Rolls Royce (1971) Ltd., Bristol Engine Division, to work on the development and testing of digital control schemes for gas turbine engines.

Joined Y-ARD in 1969 and is involved in simulation, trials and data processing activities related to ship propulsion machinery; now responsible for the trials data collection and analysis activities.

W.E. COWLEY

Dr. Cowley has worked in the Department of Mechanical Engineering, University College London.
