THE DEVELOPMENT OF A MAN-MACHINE INTERFACE FOR SPACE MANIPULATOR DISPLACEMENT TASKS

Eric F.T. Buiël

Delft University of Technology, Department of Mechanical Engineering and Marine Technology, Laboratory for Measurement and Control (Man-Machine Systems Group), Mekelweg 2, 2628 CD Delft, The Netherlands. E-mail: E.Buiel@wbmt.tudelft.nl

Abstract: A space manipulator is a lightweight robotic arm mounted on a space station or a spacecraft. If the manipulator is manually controlled from a remote location (teleoperation), the human operator can only see its movements in the pictures from the cameras mounted on the manipulator and the pictures from the cameras installed in the neighbourhood of the manipulator. Typical tasks of space manipulators are transportation tasks and inspection tasks. During the execution of these displacement tasks, the human operator has to be alert not to cause a collision between the manipulator limbs and objects in the environment. This is not an easy job, because the distances between the manipulator and the objects can hardly be estimated from the available camera pictures.

Recently, at our laboratory, a conceptual man-machine interface has been developed for space manipulator displacement tasks. The new anthropomorphic European Robot Arm (ERA) served as a reference in this project. At the control-side of the interface, a force-activated control device with six degrees-of-freedom (the Spaceball) is applied to control the movements of the ERA end-effector (the hand of the manipulator). At the display-side of the interface, a single camera picture is shown: the picture from the camera that is mounted near the ERA elbow joint. To assist the operator in deriving spatial information from the elbow camera picture, a graphical camera overlay is added: the Raindrop Overlay. In this graphical overlay, the actual distances between the manipulator and the objects in the environment are visualised by means of raindrop-shaped distance lines.

Keywords: Teleoperation, Manual Control, Collision Avoidance, Graphical Displays

1. INTRODUCTION

1.1 The manual control of a space manipulator

A space manipulator is a lightweight robotic arm mounted on a space station or a spacecraft. There, it performs inspection and maintenance tasks, e.g. the repair of a damaged satellite. Figure 1 shows an example of a space manipulator: the European Robot Arm ERA¹ (van Woerkom et al., 1994; Traa, 1995). This fully symmetric manipulator with two grapples (the end-effectors) is meant for the new International Space Station (ISS; Dooling, 1995). It is approximately ten metres long and will be able to walk across the station by moving an end-effector from its actual base point to a new one; alternately with the first and the second end-effector.

Recurrent tasks of a space manipulator (e.g. the replacement of Orbital Replaceable Units containing scientific experiments) may well be automated and performed under supervisory control. This does not seem plausible for tasks that are not well defined in advance (e.g. repair tasks). Here, the inventiveness of the human operator is required more often. Then, teleoperation (Sheridan, 1992) seems a suitable control method. With this method, the human operator controls the manipulator by hand from a remote location (e.g. a space station's manned module or a ground station on earth). Astronauts do not have to go outside their spacecraft to control the manipulator; the manipulator movements are controlled with the help of the pictures from the cameras installed in the neighbourhood of the manipulator and the pictures from the cameras mounted on the manipulator itself.

The mentioned teleoperation task is a hard job for the human operator. First, the lack of spatial information in the available camera pictures complicates manual control. Besides, task execution suffers from the manipulator dynamics: because of the lightly constructed limbs, the manipulator will be flexible. Finally, when the operator controls the manipulator on earth, time delays are introduced in the control loop. These delays are caused by the transmission of control signals from earth to space, and back again.

¹ The European Robot Arm is developed at Fokker Space B.V. in Leiden, The Netherlands.

At the Delft University of Technology (DUT), the manual control of a space manipulator at a remote location is an object of study (Bos, 1991). The research is aimed at the development of a conceptual man-machine interface (MMI) that can diminish the three problems mentioned above as much as possible. Elements of the interface are implemented and tested in a simulator (see Figure 2). In this simulator, the movements of the European Robot Arm ERA are simulated by means of a Silicon Graphics graphical workstation (①). This computer animates simplified camera pictures of the ERA movements and additional information displays (②) in real time. Subjects control the movements with a Spaceball® control device (③): a force-activated control device with six degrees-of-freedom (DOFs).

Generally speaking, the activities of a space manipulator can be subdivided in two elemental tasks: the positioning task and the displacement task. Breedveld (1995a and 1995b) has developed the DUT interface for the positioning task. This paper will focus on the development of the DUT interface for the displacement task.

1.2 The space manipulator displacement task

Transportation tasks and inspection tasks are typical displacement tasks. During the execution of these tasks, the manipulator covers large distances. Then, the human operator has to be alert not to cause a collision between the manipulator limbs and objects in the environment. This is not an easy job, because the distances between the manipulator and these objects can hardly be estimated from the available camera pictures. Even worse, sometimes a dangerous object isn't even visible in the camera picture currently observed by the operator.

Figure 2 The simulation facility (① graphical workstation, ② animated camera picture, ③ Spaceball control device)

1.3 The MMI for the displacement task

If we want the human operator to avoid collisions, the MMI for the displacement task has to provide the operator all the information he needs to be able to estimate and control the actual risk of a collision for all parts of the manipulator. Two instruments can provide the necessary information: sensors measuring the current manipulator position, and (obviously) the available cameras. In the case of the European Robot Arm, a number of cameras are located on the ISS (environment cameras), and others are mounted on the ERA itself (four robot cameras; see Figure 1). Besides, the actual joint angles are measured by angle sensors, and the locations of the elements of the ISS are registered in a geometry database (the world model).

This paper will guide you through three stages in the design of the DUT interface for the displacement task. In the first stage, the information to be presented at the display-side of the interface has been selected. Here, one of the available camera pictures has been chosen to be the central camera picture on the interface console. In the second stage, the six Spaceball DOFs have been mapped consciously to the movements of the manipulator in that picture (design of the control method). Third, a graphical overlay has been designed. This overlay emphasises the spatial information in the central camera picture, and presents additional information about the collision risk at the locations that are invisible in it.

2. THE DISPLAY-SIDE OF THE INTERFACE

2.1 Introduction

In current teleoperation testbeds (e.g. Pauly and Kraiss, 1995; Blackmon and Stark, 1995; Silva and Gonçalves, 1993; Bejczy, 1996), all of the available camera pictures and position information are often integrated in a small number of spatial information displays. Each of these displays presents a different view of the remote environment to the operator. This view can either be an existing camera picture with a spatial graphical overlay, or a synthetic spatial image of the environment (an artificial camera picture). In both cases, 3D computer graphics are used to visualise the available position information.

2.2 Two subtasks in the displacement task

The availability of multiple viewpoints of the remote site is important for planning the rough collision-free path to the desired location of the manipulator; in this planning subtask, information about the whole working environment is of major importance. But while moving along a chosen path, that is not the case anymore. For this control subtask, the operator requires detailed information about the collision risk at the current manipulator location only. If he is expected to extract this information from the same information displays as the ones used for the planning subtask, the operator may find difficulties, especially if the manipulator has to cover large distances. Then, multiple (artificial) camera pictures can show valid information at the same time. E.g. one picture might show the current position of the end-effector, and another one might show a hazardous object at short distance from the manipulator base. Then, it can be difficult to decide on future control actions.

Until now, relatively little attention has been paid to the operator's information needs in the control subtask. In most of the current teleoperation testbeds, the above-mentioned planning-oriented displays are applied to pre-program the desired displacement (e.g. Blackmon and Stark, 1995), or to perform the job in (semi-)supervisory control (e.g. Park, 1991). Generally, fully manual control of gross motions is only a redundant control method meant for extraordinary situations in which the normal control method is malfunctioning. Consequently, there is no proper display concept for the control subtask. For this reason, the development of the DUT interface for the displacement task has mainly been focused on that part of the job. It is assumed that suitable planning-oriented displays are available, and that the operator already knows the rough collision-free path to the desired location while using the information display for the control subtask.

2.3 The information display for the control subtask

In the ideal situation, all the information the operator needs for the control subtask is visualised in one single spatial image of the current manipulator location. To ensure that this image will always show valid information, the viewpoint of the image has to move simultaneously with the manipulator movements. In the ERA case, the viewpoint for a synthetic 'best view' of the current manipulator environment can always be computed from the actual manipulator position and the ISS world model. For a teleoperator with moving base, Das (1989) proposed to calculate the viewpoint from which the operator can see the end-effector, and the two objects at the shortest distance from the manipulator. Then, the operator always controls the movements of the end-effector in the best view of the area with the largest danger of a collision. Unfortunately, with this method, the automatic movements of the artificial camera are tied up with the locations of objects in the actual environment of the manipulator. Then, it is difficult for the operator to predict the camera movement that will result from his intended manipulator movements. Therefore, he must check in which way he looks at the remote environment after each control action. This 'mental displacement' of the viewpoint will take more time in case the viewpoint change is large (experiments carried out by Kleinhans (1992) confirm this theory).

A more intuitive way to move the camera viewpoint simultaneously with the manipulator movements can be realised by using the picture from an (artificial) robot camera. If the control method is consciously designed, the operator will know exactly in which direction the 'home limb' of the camera will move for each elemental movement of the manipulator. Then, the mental viewpoint displacement will hardly take any time. Ultimately, the mapping between the movements of the control device and the camera movements results in a feeling of telepresence (Sheridan, 1992): the operator feels as if he controls the movements of his own eyes in the remote environment and flies along with the manipulator. In that case, he might well be able to perceive spatial information in the camera picture in a similar way as in daily life. According to Gibson (1979), the flow patterns in the retinal picture of the human eye (optic flow) form an essential cue for body motion perception in daily life. While displacing a space manipulator, the home limb of a robot camera will never move in the accompanying camera picture. The operator must perceive the manipulator motion from the resulting movements of the objects visible in the background of the picture. This continuous flow of objects might well be an analogous cue for manipulator motion perception as optic flow is for body motion perception in daily life.

In the DUT spatial information display for the control subtask, the above-mentioned idea has been adopted. The picture of the elbow camera mounted on the ERA forearm (see Figure 1) is the central spatial image in the display. Figure 3 shows the elbow camera picture as it is animated in the experimental facility. The picture will always show the movements of the end-effector, and two other parts of the manipulator often in danger of a collision: the wrist and the forearm. Since the elbow camera is mounted near the elbow, the forearm will partially cover the operator's view of the actual environment at any time.


The usage of the elbow camera picture makes two demands on the further design of the MMI for the control subtask. First, a control method has to be found that enables the operator to predict the camera movements that will result from his intended control actions. Second, the graphical camera overlay has to provide information about the danger of a collision for the invisible parts of the manipulator: the upper arm and the backside of the forearm. In an ideal situation, the visualisation of the collision risk in the overlay simultaneously suggests the control actions required to minimise this danger. The graphical overlay has to be adapted to the applied control method to achieve this aim. Therefore, the choice of the control method preceded the development of the graphical overlay.

3. THE CONTROL-SIDE OF THE INTERFACE

3.1 Introduction

With the force-activated Spaceball, the translational and angular velocity of a spatial object in a three-dimensional workspace can be controlled intuitively. The magnitude of the force (torque) applied to the Spaceball determines the magnitude of the object's translational (angular) velocity. The direction of the applied force (torque) determines the direction of the object's translational (angular) velocity. So, if the user grasps the Spaceball as if he grasps a car's gear lever, he might feel it as if he grasps the controlled object (virtual grasping, see Figure 4).

Normally, if a Spaceball is used to control the movements of a robot, the principle of kinematic control is applied. With this method, the operator virtually grasps the end-effector. After he has defined a desired end-effector pose change, the joint velocities necessary to attain the desired pose change are automatically computed from the manipulator's inverse kinematics. The implementation of kinematic control requires the choice of a control base frame. This is the coordinate frame in which the operator specifies the desired end-effector pose changes. After the control base frame has been chosen, the mapping method must be selected. This method defines in which way the six Spaceball DOFs are mapped to changes of the end-effector pose in the control base frame.
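As an aside, the kinematic control principle can be sketched in a few lines of code. The planar two-link arm, its Jacobian, and the rate-control gain below are illustrative assumptions (the ERA's real kinematics and gains are not given in this paper); the sketch only shows how a commanded end-effector velocity is turned into joint velocities via the (pseudo)inverse kinematics.

```python
import numpy as np

def spaceball_to_velocity(force, k_v=0.02):
    """Rate control: the applied force sets the translational velocity
    setpoint (gain k_v is an illustrative assumption, in m/s per N)."""
    return k_v * np.asarray(force, dtype=float)

def joint_velocities(jacobian, ee_velocity):
    """Kinematic control: solve J * dq = v for the joint velocities."""
    return np.linalg.pinv(jacobian) @ ee_velocity

# Illustrative planar 2-link arm (unit link lengths) at q = (0, pi/2)
q1, q2 = 0.0, np.pi / 2
J = np.array([
    [-np.sin(q1) - np.sin(q1 + q2), -np.sin(q1 + q2)],
    [ np.cos(q1) + np.cos(q1 + q2),  np.cos(q1 + q2)],
])
v = spaceball_to_velocity([0.0, 5.0])   # push: 5 N in the y-direction
dq = joint_velocities(J, v)             # resulting joint velocities
```

Note that the pseudoinverse also covers redundant manipulators, where more than one joint-velocity vector realises the commanded pose change.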

3.2 The choice of the control base frame

The origin of the control base frame (the control origin) is imaginarily and inseparably linked to a part of the space manipulator. The orientation of the frame defines the principal movements of the end-effector. The operator can define a desired translation of the end-effector as a combination of three orthogonal translations in the directions of the frame axes. A desired change in orientation can be defined with a rotation vector. The direction of this vector defines the direction of the rotation axis; the vector length defines the rotation angle. Position and orientation of the control base frame have to be chosen carefully. To avoid mental rotation problems, the orientation of the frame in the elbow camera picture should never change. Therefore, the orientation of the control base frame has been equated to the orientation of a frame imaginarily linked to the elbow camera (the camera frame; see Figure 5). The frame position (as defined by the control origin) determines the point that is insensible for rotation commands. At first sight, it seems wise to place the control base frame upon the end-effector (the end-effector frame; see Figure 5). In this case, the end-effector position and orientation can be controlled separately. E.g. if the control origin is located at the end-effector tip, the operator can first move the tip to the desired location. After that, the orientation of the end-effector can be corrected without changing the tip position. Unfortunately, in the case of the ERA, the usage of an end-effector frame has a major drawback. In almost all poses of the manipulator, a movement of the end-effector in one of the principal movement directions will require rotations of all six ERA joints. Because of this, all manipulator limbs will move in different directions during the change of the end-effector pose. Then it will be difficult for the operator to predict the resulting limb and camera movements. As a result, he can hardly control the collision risk around the limbs.

Figure 4 'Virtual grasping' of a 3D object with the Spaceball control device

Figure 5 Frame locations

To avoid the mentioned problems, the control base frame has been placed at the end of the forearm: the wrist (see Figure 5). Now, a translation in the direction of one of the frame axes requires rotations of the two shoulder joints and/or the elbow joint only (see Figure 6). Generally, an end-effector rotation defined by a rotation vector located at the control origin will still require movements of all joints. But this number of joint rotations can now be decreased if the desired orientation changes are no longer specified with rotation vectors.

Figure 6 Manipulator movements resulting from the three principal translations of the end-effector (wrist sideways, wrist downward and wrist forward; showing the desired end-effector movement and the resulting joint rotation(s))

Note that if the axes of the three wrist joints (joints IV through VI in Figure 1) would have intersected at the control origin, only rotations of these joints would be needed to rotate the end-effector in the direction of the commanded rotation vector. This situation can be approximated if a change in orientation is specified with a sequence of wrist joint rotations, instead of a rotation vector. With this alternative method, each joint rotation defines a principal rotation of the end-effector: joint IV influences the pitch-rotation, joint V influences the yaw-rotation, and joint VI influences the roll-rotation. Each of the rotational Spaceball DOFs is mapped to one of the principal rotation directions. Since the wrist joints are all located in the region of the control origin, the operator still feels as if he controls the end-effector pose in the control base frame.
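The per-joint rotation mapping above amounts to a simple lookup rather than inverse kinematics. A minimal sketch, where the assignment of Spaceball axes to joints follows Section 3.3 and the gain value is an illustrative assumption:

```python
def rotation_rates(sb_torque, k_w=0.01):
    """Semi-kinematic rotation control: each rotational Spaceball DOF
    drives one wrist joint directly, so no inverse kinematics is
    needed for rotation commands.

    sb_torque: (x, y, z) torques applied to the Spaceball.
    k_w is an illustrative gain (rad/s per Nm).
    """
    tx, ty, tz = sb_torque
    return {
        "joint V (yaw)":    k_w * tx,  # Spaceball x-rotation
        "joint IV (pitch)": k_w * ty,  # Spaceball y-rotation
        "joint VI (roll)":  k_w * tz,  # Spaceball z-rotation
    }

rates = rotation_rates((1.0, 2.0, 0.0))
```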

The proposed semi-kinematic control method results in distinct responses on translation and rotation commands. In most cases, a displacement of the end-effector in a principal direction will result in a rotation of one single joint. Only a translation in z-direction ('forward') will result in simultaneous rotations of two joints (see Figure 6). These distinct responses make it very easy to predict the limb movements resulting from a specific control action. As a result, the camera movements can also be predicted easily (note that the camera will never move after a rotation command). Therefore, this control method has been implemented in the DUT interface for the displacement task.

3.3 The choice of the mapping method

Just like the six principal movements of the end-effector are defined by the control base frame, the six principal movements of the Spaceball are defined by the Spaceball frame. The origin of this frame is imaginarily linked to the centre of the Spaceball sphere. The orientation of the frame defines the mapping method: a translation or rotation of the Spaceball in the direction of one of the frame axes results in an analogous displacement of the end-effector in the control base frame.

Earlier man-machine experiments with the DUT interface for space manipulator positioning tasks have shown the benefits of the downward mapping of translations (Buiël and Breedveld, 1995). With this method, the operator must push the Spaceball downward to translate the end-effector forward in the elbow camera picture (i.e. in the z-direction of the wrist frame). So, there is a 90° rotation between the end-effector translations observed in the camera picture and the translations of the Spaceball with respect to the top of its supporting table (Figure 4 shows the Spaceball frame orientation that's required for this method). With the downward mapping, the tabletop serves as a reference plane. Just like in window-based software - where the movements of the mouse parallel to the tabletop correspond to the movements of the arrow-pointer parallel to the desktop - the movements of the Spaceball parallel to the tabletop (the control reference plane) correspond to the movements of the end-effector parallel to the elbow camera lens (the movement reference plane). Because of the demonstrated advantages of this mapping method, it has also been implemented in the DUT interface for the displacement task. In accordance with the downward mapping of translations, the Spaceball x- and y-rotation have been mapped to the yaw- and pitch-rotation of the end-effector respectively. Finally, the Spaceball z-rotation has been mapped to the end-effector roll-rotation.
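The downward mapping itself reduces to a fixed remapping of axes. In the sketch below, the axis conventions are assumptions made for illustration (Spaceball: x right, y away from the operator, z up; wrist frame as seen in the elbow camera picture: x right, y up, z forward into the picture):

```python
def downward_mapping(sb_translation):
    """Map a Spaceball translation command to a wrist-frame translation.

    Downward mapping: pushing the Spaceball down commands a forward
    end-effector translation, while movements parallel to the tabletop
    command movements parallel to the elbow camera lens (cf. the
    mouse/desktop analogy in the text).
    """
    sx, sy, sz = sb_translation
    return (sx,   # right on the table      -> right in the picture
            sy,   # away from the operator  -> up in the picture
            -sz)  # down on the table       -> forward (wrist z)
```

With this convention, a downward push `(0, 0, -1)` yields a forward command `(0, 0, 1)`, while tabletop-parallel pushes pass through unchanged.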

4. THE GRAPHICAL CAMERA OVERLAY

4.1 Introduction

The chosen control method ensures that the operator can predict the resulting limb movements for every displacement of the end-effector. If a limb is in danger of a collision, the graphical overlay for the elbow camera picture can now assist the operator in avoiding the collision by suggesting the limb displacement that's needed to decrease the collision danger. An intuitive way to do this is to visualise the shortest distance between the limb and the hazardous object in the environment. At DUT, de Beurs (1995) developed a computer algorithm for computing distances between objects in the ISS world model. For each pair of objects, the two object nodes at closest distance are calculated by the algorithm in very little computing time. If both nodes are visible in the camera picture, the distance can be visualised by means of a cleverly shaped distance line (see 4.2). If they're not visible, the control actions required to decrease the collision risk have to be visualised in a different way (see 4.3).
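The distance algorithm of de Beurs (1995) is not reproduced in this paper; as a stand-in, a brute-force sketch of the same interface (the closest node pair per object pair) could look as follows. The real algorithm is optimised to run in very little computing time; this O(n·m) version only illustrates what it computes.

```python
import itertools
import math

def closest_nodes(nodes_a, nodes_b):
    """Return (distance, node_a, node_b) for the two object nodes at
    closest distance, given each object as a list of 3D node points."""
    return min(
        (math.dist(pa, pb), pa, pb)
        for pa, pb in itertools.product(nodes_a, nodes_b)
    )
```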

4.2 Visualisation of distances with distance lines

Figure 7 shows the basic idea for the graphical overlay with distance lines (de Beurs, 1995). Three dashed lines visualise the closest distances between the space manipulator and the elements of a central truss at the ISS. To the operator, it seems as if each of the lines is actually present in the remote environment.

Figure 7 Elbow camera picture with basic distance lines

Pilot experiments with this distance line overlay demonstrated the usefulness of the added distance information. But they also indicated two problems inherent to the shape of the distance lines. First, if both the environment-end and the manipulator-end of a distance line are (almost) in one line with the viewing direction of the elbow camera, the operator can hardly estimate the length of the line. Second, the danger of a collision increases at the moment the length of the distance line decreases. This is a major drawback from an ergonomic point of view: at the moment the danger grows, its display indicator becomes less eye-catching. Ultimately, at the moment a collision occurs, it isn't even visible.

Next to the shape of the basic distance line, Figure 8 shows three alternative shapes for this line that (partially) solve the observed problems. The first option, the elastic line, solves the first problem only. The cross-section of this line increases after a decline of the line's length. Indeed, if it is observed from one of its ends, the line becomes more distinctive at the moment its length decreases. But if it is observed from the side, the line will still be flattened. The second option, the sphere, solves this problem. Here, a sphere marks the environment-end of the original distance line. The sphere radius increases while the indicated distance decreases. Since its size now grows in all directions, the distance indication will be clearly visible from any side. Note that at the moment the sphere radius equals the remaining distance, the manipulator will intersect the sphere (distance < 5 cm in Figure 8). At this moment, the maximum amplitude of the limb vibrations due to the limb flexibility roughly equals the remaining distance, and major attention is needed from the operator. To indicate this, the sphere changes colour (green turns red).

In this way, the sphere provides information about the manipulator flexibility and solves both of the observed visualisation problems at the same time. But it introduces a new problem also. At the moment the sphere does not intersect the manipulator, it does not visualise the direction of the distance line. The third (and finally preferred) option, the raindrop, solves this problem. Here, the sphere merges with a dashed cone when the indicated distance exceeds the sphere radius. The top of the cone marks the manipulator-end of the original distance line. At the moment the sphere radius exceeds the distance once more, the raindrop turns into a sphere again. Figure 10 shows the resulting elbow camera picture with raindrop-shaped distance lines. Once again, the picture shows three distances to a primary truss of the ISS. At the location of the large sphere, the distance between the manipulator and the truss is almost zero.
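The qualitative behaviour of the sphere and raindrop indicators can be captured in a small state function. All numeric parameters below (radius range, maximum indicated distance) are illustrative assumptions; the paper only fixes the qualitative rules: the sphere radius grows as the distance shrinks, the dashed cone is drawn while the distance still exceeds the radius, and the colour turns from green to red once the sphere intersects the manipulator.

```python
def raindrop_state(distance, r_min=0.01, r_max=0.10, d_max=0.50):
    """Sketch of the raindrop indicator's state for a given distance (m).

    r_min/r_max bound the sphere radius, d_max is the largest distance
    still indicated; all three values are illustrative assumptions.
    """
    d = max(0.0, min(distance, d_max))
    # the radius grows linearly as the indicated distance declines
    radius = r_min + (r_max - r_min) * (1.0 - d / d_max)
    intersects = radius >= distance  # manipulator inside the sphere
    return {
        "radius": radius,
        "show_cone": not intersects,  # raindrop = sphere + dashed cone
        "colour": "red" if intersects else "green",
    }
```

A usage check: at 35 cm the indicator is a small green raindrop; near 1.5 cm it is a large red sphere without a cone.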

Figure 8 The four distance line shapes (basic line, elastic line, sphere and raindrop) at indicated distances of 35 cm, 20 cm, 4 cm and 1.5 cm (manipulator side at the top, environment side at the bottom)

Figure 10 Elbow camera picture with raindrop-shaped distance lines

4.3 Visualisation of the collision danger outside the visible area

Two parts of the ERA are invisible in the elbow camera picture: the upper arm, and the backside of the forearm. The collision danger for the backside of the forearm can be visualised by virtually transforming the forearm into a transparent glass tube. As a result, the raindrops currently located at the backside of the forearm will be visible at the back of the tube. Of course, this strategy can not be applied to visualise the collision danger around the upper arm. The distance lines located near this manipulator limb will normally be located outside the viewing volume of the elbow camera. For each of these invisible raindrops, the overlay must visualise the size of its sphere and the direction of its dashed cone in a different way.

Figure 9 Collision danger near the upper arm (the radial position of the starting point of the distance line in the upper-arm cross section is quantified by the angle α)

Figure 9 shows an example of a situation in which the upper arm is in danger of a collision. Here, a hazardous object is located close to the bottom of the upper arm. The direction of the accompanying distance line is visualised in a cross section of the upper arm. Because of the cylindrical shape of the upper arm, this line will always be directed perpendicular to the surface of the upper arm. The radial position of the line (quantified by the angle α) is an important cue for the determination of future control actions. Since the line is located at the bottom of the upper arm, the operator knows that a collision will occur if he moves the wrist forward. In the same way, if the line would have been located on the left side of the upper arm, a collision would have occurred if he had moved the wrist to the left.

From Figure 9, it can be concluded that the radial position of each invisible distance line implicitly shows the control action that's needed to decrease the local collision risk. This important observation is utilised in the display indicator for the collision danger around the upper arm (see Figure 11). The main element of this indicator is a large circle. This circle represents the cross section of the upper arm. For each invisible distance line, a sizeable sphere marks the location of its manipulator-end (e.g. the sphere located at the bottom of the circle in Figure 11 represents the location of the hazardous object visible in Figure 9). Just like the raindrops in the elbow camera picture, the sphere radius increases while the indicated distance decreases. From the radial position of a sphere, the operator can read the control action that's needed to decrease the local collision risk.
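The reading of the upper-arm indicator can be made concrete with a small sketch. The angle convention (0° at the top of the cross section, 180° at the bottom, as in Figure 9) and the two unstated quadrants are assumptions; the paper only fixes that a line at the bottom means the collision happens on a forward wrist motion, and a line on the left on a leftward one.

```python
import math

def radial_angle(h, v):
    """Radial position (deg) of a distance line in the upper-arm cross
    section: 0 at the top, 90 on the right, 180 at the bottom.
    (h, v): horizontal/vertical direction from the arm axis towards
    the hazardous object."""
    return math.degrees(math.atan2(h, v)) % 360.0

def dangerous_wrist_motion(alpha):
    """Wrist motion that would decrease the remaining distance, read
    from the radial angle (snapped to the nearest principal direction)."""
    motions = {0.0: "wrist backward", 90.0: "wrist right",
               180.0: "wrist forward", 270.0: "wrist left"}
    nearest = min(motions,
                  key=lambda a: min(abs(alpha - a), 360.0 - abs(alpha - a)))
    return motions[nearest]
```

For the situation of Figure 9 (object below the upper arm, direction (0, -1)), the sketch reproduces the text's reading: moving the wrist forward would cause the collision.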

5. CONCLUSIONS AND FUTURE RESEARCH

In this paper, most attention has been paid to the problem of collision avoidance during gross motions of a space manipulator. The developed graphical overlay with spherical and raindrop-shaped distance indicators clearly visualises the locations currently in danger of a collision, and the control actions that are needed to decrease the danger. As a result, the overlay provides a solution for the first general problem in teleoperation tasks: the lack of spatial information in the available camera picture(s). At the same time, a solution for the second problem - the flexibility of the manipulator limbs - is provided. Raindrops and spheres implicitly show the maximum amplitude of the manipulator limb vibration caused by the flexibility of the limbs. Only the last problem - the introduction of time delays when the operator controls the manipulator on earth - has not been considered yet.

Figure 11 Visualisation of collision danger near the upper arm

In the near future, man-machine experiments will be carried out to demonstrate the usefulness of the provided distance information. Finally, the time delay problem will be considered. To eliminate this problem, the developed spatial information display will be transformed into a setpoint display (Breedveld, 1995b). A setpoint display visualises the operator's control actions (i.e. the setpoint for the manipulator velocity) immediately after they have been carried out. E.g. the movements of a transparent (phantom) manipulator can visualise the current velocity setpoint. Since he can directly see the results of his control actions, the operator does not have to wait until actual camera pictures arrive at his location.
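The benefit of a setpoint display under a transmission delay can be illustrated with a single-axis toy model (an assumption-laden sketch, not Breedveld's implementation): the phantom pose integrates the velocity setpoints immediately, while the pictured actual pose only reflects each command a fixed number of steps later.

```python
from collections import deque

def simulate_setpoint_display(setpoints, delay_steps):
    """Single-axis toy model of a setpoint display under delay.

    The phantom manipulator integrates each velocity setpoint at once;
    the actual (pictured) pose receives each command delay_steps later.
    All manipulator dynamics are ignored (assumption).
    """
    pipeline = deque([0.0] * delay_steps)  # commands still in transit
    phantom = actual = 0.0
    trace = []
    for v in setpoints:
        phantom += v                   # shown to the operator at once
        pipeline.append(v)
        actual += pipeline.popleft()   # arrives after the delay
        trace.append((phantom, actual))
    return trace
```

With two commanded steps and a two-step delay, the phantom reaches the goal immediately while the pictured pose catches up two steps later, which is exactly the waiting the display spares the operator.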

ACKNOWLEDGEMENTS

This research is supported by the Dutch Technology Foundation STW (Utrecht, The Netherlands) and Fokker Space B.V. (Leiden, The Netherlands). See http://www-mr.wbmt.tudelft.nl/~buiel on the World Wide Web for additional information.

REFERENCES

Bejczy, A.K. (1996). New Technologies: Teleoperators. IEEE Industrial Electronics Society Newsletter, March 1996, pp. 4-12.

Beurs, M. de (1995). Graphical Displays for Collision Avoidance (in Dutch). Report A-697, Delft University of Technology, Department of Mechanical Engineering and Marine Technology, Lab. for Measurement and Control, Delft, The Netherlands. 124 p.

Blackmon, T.T. and L.W. Stark (1995). Human-machine interface for model-based experimental telerobotics system. Pre-prints 6th IFAC Symposium on Analysis, Design and Evaluation of Man-Machine Systems, Cambridge, Massachusetts, USA. 6 p.

Bos, J.F.T. (1991). Man-Machine Aspects of Remotely Controlled Space Manipulators. PhD Thesis, Delft University of Technology, Department of Mechanical Engineering and Marine Technology, Delft, The Netherlands. ISBN 90-370-0056-8. 177 p.

Breedveld, P. (1995a). The Development of a Man-Machine Interface for Telemanipulator Positioning Tasks. Pre-prints 6th IFAC Symposium on Analysis, Design and Evaluation of Man-Machine Systems, Cambridge, Massachusetts, USA. 6 p.

Breedveld, P. (1995b). The Development of a Predictive Display for Space Manipulator Positioning Tasks. Proceedings 14th European Annual Conference on Human Decision Making and Manual Control, Delft, The Netherlands. 8 p.

Buiël, E.F.T. and P. Breedveld (1995). A Laboratory Evaluation of four Control Methods for Space Manipulator Positioning Tasks. Pre-prints 6th IFAC Symposium on Analysis, Design and Evaluation of Man-Machine Systems, Cambridge, Massachusetts, USA. 6 p.

Das, H. (1989). Kinematic Control and Visual Display of Redundant Teleoperators. PhD Thesis, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA. 108 p.

Dooling, D. (1995). Research outpost beyond the sky. IEEE Spectrum, October 1995.

Gibson, J.J. (1979). The Ecological Approach to Visual Perception. Houghton Mifflin, Boston, USA. ISBN 0-395-27049-9. 332 p.

Kleinhans, M. (1992). Mental rotation of topographical maps (in Dutch). Internal report, Leiden University, Faculty of Social Sciences, Theoretical Psychology Group, Leiden, The Netherlands. 14 p.

Pauly, M. and K.F. Kraiss (1995). A concept for symbolic interaction with semi-autonomous mobile systems. Pre-prints 6th IFAC Symposium on Analysis, Design and Evaluation of Man-Machine Systems, Cambridge, Massachusetts, USA. 6 p.

Park, J.H. (1991). Supervisory Control of Robot Manipulator for Gross Motions. PhD Thesis, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA. 99 p.

Silva, F. and J.G.M. Goncalves (1993). Human Computer Interface for the Tele-Operation of a Manipulator Arm. Technical Note 1.93.129, C.E.C. Joint Research Centre, Ispra site, Institute for Systems Engineering and Informatics, Ispra, Italy. 26 p.

Sheridan, T.B. (1992). Telerobotics, Automation, and Human Supervisory Control. The MIT Press, Cambridge, Massachusetts, USA. ISBN 0-262-19316-7. 415 p.

Traa, M. (1995). Dutch robotic arm aids Russian astronauts in outdoor jobs (in Dutch). Polytechnisch Tijdschrift (separate editions for mechanics, electronics and process control), volume 50, January 1995. Ten Hagen & Stam Publishers, The Hague, The Netherlands. 4 p.

Woerkom, P.Th.L.M. van, A. de Boer, M.H.M. Ellenbroek and J.J. Wijker (1994). Developing algorithms for efficient simulation of flexible space manipulator operations. 45th Congress of the International Astronautical Federation, Jerusalem, Israel. 13 p.
