

Delft University of Technology

UAV Haptic Interface for Dynamic Obstacle Avoidance

Piessens, Tom; van Paassen, Rene; Mulder, Max

DOI: 10.2514/6.2020-1112

Publication date: 2020

Document Version: Final published version

Published in: AIAA Scitech 2020 Forum

Citation (APA)

Piessens, T., van Paassen, R., & Mulder, M. (2020). UAV Haptic Interface for Dynamic Obstacle Avoidance. In AIAA Scitech 2020 Forum: 6-10 January 2020, Orlando, FL (pp. 1-26). [AIAA 2020-1112] (AIAA Scitech 2020 Forum; Vol. 1 PartF). American Institute of Aeronautics and Astronautics Inc. (AIAA).

https://doi.org/10.2514/6.2020-1112

Important note

To cite this publication, please use the final published version (if applicable). Please check the document version above.

Copyright

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Takedown policy

Please contact us and provide details if you believe this document breaches copyrights. We will remove access to the work immediately and investigate your claim.


UAV Haptic Interface for Dynamic Obstacle Avoidance

T. L. Z. Piessens∗, M. M. van Paassen†, and M. Mulder‡

Delft University of Technology, Delft, 2629HS, The Netherlands

Teleoperation by definition means a deprivation of the teleoperator's senses, which can pose a handicap when operating, e.g., a UAV in an unknown, perhaps even dynamic environment. Noticing moving obstacles in such a situation can prove to be quite difficult, and the UAV runs the risk of colliding with them. Previous work designed a shared control haptic interface based on the artificial force field method to help navigating in a static environment. This interface was evaluated for its usability in a dynamic environment where linearly moving obstacles were present. Offline simulations show that the existing interface would have difficulty in preventing collisions with moving obstacles. A new method is developed based on the velocity obstacles method. The new design supports the operator by using the haptic side stick to guide the operator out of a so-called "forbidden velocity zone". Offline tests show that the developed algorithm is indeed capable of avoiding both static and dynamic obstacles. It was implemented in a real-time simulator and investigated further with human-in-the-loop experiments. An initial test in a simulator with five participants shows promising results in avoiding suddenly appearing obstacles and moving obstacles in the open field. The haptic controller, however, makes maneuvering in tight spaces hard for the operator.

I. Introduction

The technology of Unmanned Air Vehicles (UAVs) is readily available and is expected to become more accessible over time. The applicability of UAVs ranges from surveillance to delivering a payload or providing humanitarian aid.1 The latter can take place in inhospitable areas, especially in an urban search and rescue (USAR) task. The first deployment of UAVs for humanitarian aid occurred in 2005 after hurricane Katrina, and one of the latest examples is the use of UAVs at Fukushima for structural inspections. For such inspections the UAV has to approach structures within three metres in order for structural engineers to be able to inspect them.2 This requires good human-robot interaction and a good obstacle avoidance system.

Operating a vehicle remotely inherently means a deprivation of the senses.3

In order to aid the operator in flying in a cluttered environment, a haptic interface for collision avoidance has been developed.4-11 This interface provides haptic shared control assistance, and thus only informs the operator about a possible collision. The pilot then decides whether to follow the advice or continue with the manoeuvre. A thorough literature study showed that the research performed at TU Delft's Control and Simulation section into haptic collision avoidance for UAVs has been followed up by research at other institutions.12-15 All this

∗MSc Student, Control and Simulation Section, Faculty of Aerospace Engineering, Delft University of Technology; Kluyverweg 1, 2629HS, Delft, The Netherlands.

†Associate Professor, Control and Simulation Section, Faculty of Aerospace Engineering, Delft University of Technology; Kluyverweg 1, 2629HS, Delft, The Netherlands.

‡Professor, Control and Simulation Section, Faculty of Aerospace Engineering, Delft University of Technology; Kluyverweg 1, 2629HS, Delft, The Netherlands, AIAA Associate Fellow.



research solely focuses on collision prevention with static obstacles; to date, no shared control solution could be found for teleoperation in a dynamic environment.

The aim of this project is to extend this haptic interface with haptic cues that inform the pilot about incoming dynamic obstacles and advise on how to act on them. The collision avoidance system (CAS) should inform the operator about an incoming threat and support an evasive manoeuvre. A solution has been found in the velocity obstacle (VO) method, also known as the collision cone approach. This method is a conflict resolution tool that can be used for both moving and static obstacles16 and visualizes for the operator which velocities are available or lead to a conflict. The VO method is used to advise a speed change in case of a conflict by generating a haptic signal that corresponds with this speed change. In addition, the visualization of the velocity obstacles is used to aid the operator in understanding the generated haptic signals. The current use of artificial force fields will be evaluated for its usability in a dynamic environment.

II. Related work

Figure 1. Vehicle B is crossing paths with obstacle A; by projecting its protection zone onto obstacle A, a collision cone is constructed. In this case the current velocity setting lies inside a forbidden velocity zone, which will lead to a loss of separation in the future.

A collision avoidance strategy that is suitable for both static and dynamic obstacle avoidance was found in the VO method. The method requires the relative distance to an obstacle, the obstacle's velocity vector and size, and the velocity vector of the UAV. The goal of the VO method is to ensure a minimum separation between objects; for this, a so-called protected zone (PZ) is defined around the ownship.17 The method is promising as an airborne conflict resolution tool18,19 and has proven itself in autonomous mobile robot navigation,20 capable of avoiding both static and dynamic obstacles. The VO method can be applied in complex dynamic environments21 and can take the operator's intent and the vehicle's dynamic restrictions into consideration.19 In this research, however, the interface will help to avoid obstacles that are moving linearly at a constant speed.

The VO method produces forbidden velocity zones as shown in Figure 1. Since the stick input of the UAV directly translates to a speed and yaw rate,7 this forbidden velocity zone can be seen as a forbidden region virtual fixture (FRVF).22 Using haptic feedback to guide an operator has proven helpful in supporting operators in teleoperation tasks.23-25 But it has to coincide with the operator's internal representation of the system; in other words, if the haptic feedback is not intuitive it can actually complicate the task.26 Furthermore, if the suggested solution of the system


does not coincide with the operator's preferred input, this will result in a conflict, leading to a higher workload for the operator.27,28 This poses a problem for the design of the haptic forces, which are meant to guide an operator in an unpredictable dynamic environment. Moving obstacles can appear from behind buildings and other large static obstacles in an urban or indoor environment. This can in turn impose sudden restrictions on the UAV's movement. It is unknown whether these sudden changes can be effectively and acceptably communicated to the operator.

In an attempt to solve this problem, a new display has been designed to aid the operator in understanding the generated haptic feedback. The display will show the forbidden velocity zones and how the current velocity relates to them. This display is egocentric, meaning that it is fixed to the body-axes reference frame. Using exocentric displays alongside an egocentric display (the camera view in this case) has proven to increase an operator's understanding of the environment and his sense of orientation.29 The exocentric display is best suited for navigating close to obstacles and for supporting the operator in avoiding a collision when it is in the egocentric reference frame (ERF), meaning that the heading is always displayed up.30 Expanding the interface with another display means the operator has three displays at his or her disposal to aid in navigation. This can potentially lead to human performance issues. Studies show that processing the information of two displays can lead operators to focus too much on salient and prominent information, forget to attend to the other display for a more complete overview, and develop cognitive tunnel vision.31 With a third display the operators' attention will be spread even more.

III. System Architecture

The obstacle avoidance in the new haptic interface is based on a different approach, resulting in a redesign of the haptic interface. This section explains the current implementation of the VO method for the haptic interface. A schematic representation of the system is shown in Figure 2 and depicts how the system in the HMI Lab simulator is realized. Where the previous interface would warn the operator solely through haptic feedback, the current interface also informs the operator with a new display. This display visualizes the velocity obstacles, the current speed, and a safe speed Vsafe selected by the CAS.

The algorithm, however, was first developed offline before being implemented in the HMI Lab simulator. As a result there are two realizations of the algorithm with slight differences. This section first explains the general approach that applies to both realizations; afterwards, specific differences in the modules are addressed.

A. The UAV model

As can be seen in Figure 2, the UAV is controlled with the use of a side stick. The model used for the simulations is the same as used by Lam and is pictured in Figure 3. The speed has a maximum of Vmax = 5 m/s, with a maximum acceleration amax = 1 m/s², and is controlled by a longitudinal deflection of the stick δy. The yaw rate has a maximum of ψ̇max = 0.32 rad/s, with a maximum acceleration ψ̈max = 2 rad/s², and is controlled by a lateral deflection of the stick δx. Furthermore, the operator can direct the UAV forwards and backwards, but not sideways. It is worth noting that the stick input gives a velocity command in the rotating Geodetical reference frame, which is then transformed to the Inertial reference frame.
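The stick-to-command mapping described above can be sketched as follows. This is a minimal sketch with our own helper names, not the DUECA implementation: a longitudinal deflection commands a rate-limited velocity, and a lateral deflection commands a yaw rate.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Command limits as stated in the text.
constexpr double kVmax   = 5.0;  // [m/s]  maximum speed
constexpr double kAmax   = 1.0;  // [m/s^2] rate limit on the velocity command
constexpr double kPsiMax = 0.32; // [rad/s] maximum yaw rate

struct UavCommand {
    double v_cmd   = 0.0; // commanded forward speed [m/s]
    double psi_dot = 0.0; // commanded yaw rate [rad/s]
};

// One simulation step: dy and dx are the longitudinal and lateral stick
// deflections in [-1, 1]; dt is the step size in seconds.
UavCommand stepCommand(UavCommand prev, double dx, double dy, double dt) {
    UavCommand out;
    double target = std::clamp(dy, -1.0, 1.0) * kVmax;
    // Rate limiter: the commanded speed changes by at most amax*dt per step.
    double dv = std::clamp(target - prev.v_cmd, -kAmax * dt, kAmax * dt);
    out.v_cmd = prev.v_cmd + dv;
    out.psi_dot = std::clamp(dx, -1.0, 1.0) * kPsiMax;
    return out;
}
```

At a 50 Hz rate (dt = 0.02 s), a full forward deflection thus ramps the velocity command up by 0.02 m/s per step until Vmax is reached.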

B. The VO method CAS

As mentioned earlier, the VO method requires the relative distance to an obstacle, the obstacle's velocity vector and size, and the velocity vector of the UAV. The output in case of a detected


Figure 2. Schematic representation of the system: the human operator (cognitive control and neuromuscular system, NMS) commands the UAV through the side stick; the VO method processes the LiDAR-mapped environment and renders force feedback on the stick; the displays comprise the available speeds display, the camera display, and the navigation display.

Figure 3. The UAV model as used by Lam.8 The figure reproduces Figure B.1 and the accompanying description from Lam's thesis appendix: a control-augmented UAV helicopter model, built in Simulink and converted to C code with Simulink Real-Time Workshop, that is easy to control in the horizontal plane and has no sideward hover ("fly mode"); a lateral deflection results in a turn rate. The longitudinal stick deflection δy ∈ [−1, 1] is scaled by a velocity gain to a velocity command of at most 5 m/s, rate-limited to a maximum acceleration of 1 m/s², with commanded-to-true velocity dynamics 1/((0.3s + 1)(0.18s + 1)); the lateral deflection δx ∈ [−1, 1] commands a yaw rate of at most 0.32 rad/s. The commands are in the rotating Geodetical reference frame, while the vehicle states (x, y, ẋ, ẏ) are in the Inertial reference frame, obtained through cart2pol/pol2cart transformations; the UAV velocity in the rotating frame is Vx = √(ẋ² + ẏ²).

conflict is a safe speed Vsafe. The steps that are taken to determine Vsafe are shown in Figure 4. The CAS receives the following information:

• measured distances for all the obstacles
• measured distances for only the static obstacles
• the UAV's velocity, heading, and position
• a list of the dynamic obstacles with their velocities and positions

The UAV senses its environment with the help of an on-board LiDAR system that scans the environment in a horizontal plane. It is the same one as used by the haptic interface that this



design is based on,7 with the added option of increasing the number of rays n; the current resolution is n = 500. The range of the rays is 100 m. Figure 5(d) shows how the rays are defined.

1. Sensing the environment

The environment is measured at every cycle step of the simulator: the sensor system sweeps the environment, taking periodic samples which result in a point representation of the environment as shown in Figure 5(b). These points then have to be grouped accordingly by spectral clustering. Since the focus of this research is not to develop such a method, a simple method to determine the affinity between points was used. First the measurements are grouped in terms of uninterrupted measurements. In the situation of Figure 5(a) this means that there will be two groups: the points that correspond to the measurement of obstacle C and the points that correspond to the measurements of obstacles A and B. Additionally, if an obstacle has been measured at ray number 1 and also at ray number n, these measurements at the beginning and end of the scan are considered to be one group and are 'stitched' together.

Figure 4. Flowchart for the VO algorithm: assign radials to measured points; sort the obstacles; identify the static obstacles; get the dynamic obstacles' velocity; create the collision cone; calculate the allowed velocities; if a conflict is detected, determine Vsafe.

The next step is to see whether a continuous measurement has multiple obstacles in it. This is simply done by checking the difference in distance between two neighboring points. If the difference is greater than a certain constant dsplit, in this design dsplit = 3 m, the group will be split at that point. The dilemma with this approach is visible in Figure 5(b): either dsplit is big enough to incorporate measure point 6 with the group that belongs to A, or small enough to ensure the groups of A and B are seen as separate. In consideration of this shortcoming, the dynamic obstacles will not cross each other's paths and the closest point of approach (CPA) between two obstacles will be significantly larger than the chosen dsplit in the simulator. This still means that a group which belongs together can come out as multiple smaller groups or points. As long as they are then associated with the correct obstacle and inherit the correct state this is not a problem. In this case they become a sub-obstacle and their state is equal to their 'parent', which is either a static position or a motion with a certain speed and direction.
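The grouping and stitching steps above can be sketched as follows. This is a minimal sketch with our own helper names, not the simulator code: consecutive returns stay in one group while neighbouring ranges differ by less than dsplit; a missed ray (negative range) or a larger jump starts a new group, and the first and last groups are stitched when the scan wraps around an obstacle.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Group LiDAR returns (one range per ray; negative = no return) into
// clusters of ray indices, splitting on range jumps larger than dsplit.
std::vector<std::vector<int>> groupReturns(const std::vector<double>& range,
                                           double dsplit) {
    std::vector<std::vector<int>> groups;
    for (int i = 0; i < (int)range.size(); ++i) {
        if (range[i] < 0.0) continue; // no return on this ray
        bool contiguous = !groups.empty() && i > 0 && range[i - 1] >= 0.0 &&
                          std::fabs(range[i] - range[i - 1]) < dsplit;
        if (!contiguous) groups.emplace_back();
        groups.back().push_back(i);
    }
    // Stitch: if both the first and last ray hit something and their ranges
    // are close, the first and last groups belong to one obstacle.
    if (groups.size() > 1 && range.front() >= 0.0 && range.back() >= 0.0 &&
        std::fabs(range.front() - range.back()) < dsplit) {
        groups.front().insert(groups.front().begin(),
                              groups.back().begin(), groups.back().end());
        groups.pop_back();
    }
    return groups;
}
```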

(a) Three separate obstacles (A, B, and C) are being measured by the UAV.

(b) The measurements are points in the body reference frame, defined with polar coordinates.

(c) The static obstacles that belong to the map are scanned first, resulting in the points represented by the circles. The second scan measures both the static and dynamic obstacles. The points belonging to this second scan are represented by the crosses.

(d) The LiDAR system scans the environment in a 2D plane with a resolution of n rays; ∆θ is the angle between the rays.

Figure 5.


2. Determining the state of an obstacle

The current approach assumes that the map in which the UAV is flying has been explored beforehand, so the UAV knows where to expect these obstacles and can recognize them. This leaves the unidentified obstacles for examination. Estimating an obstacle's velocity and keeping track of it from a mobile platform is a research topic in itself32,33 and was not included in this research. Instead, the obstacle's state as defined in the simulation was used in the CAS.

Static obstacles The UAV compares the current LiDAR measurement to a stored map of the environment. In the simulator this map does not exist, but is created at the same time as the environment is scanned. The simulator visuals are created with OpenSceneGraph 3.0, which offers the possibility of applying multiple node masks to objects. In this way, the visibility of objects to the LiDAR module can be controlled. In the simulator the environment is scanned twice: once for only the static obstacles and once for both the static and dynamic obstacles. The measurement points that correspond with the map obstacles are then identified as static obstacles. This is visualized in Figure 5(c). As a result, the CAS is still aware of map elements if they were to be obscured by a new obstacle.

Dynamic obstacles The point groups that remain are the unidentified obstacles. In the simulator these are the dynamic obstacles. The state of these obstacles is simply passed into the CAS algorithm, where the positions of the measured point groups are compared to the list of dynamic obstacles that are moving in the environment at that time. Any measurement that is not a map element and that is within a radius of 15 m of a specific dynamic obstacle will inherit that obstacle's state. The state of the dynamic obstacles is defined in their own body-fixed reference frame, whereas the VO method works in the body-fixed reference frame of the UAV. The obstacle's velocity vector is therefore translated in the following way:

" uobst

vobst

# "

cos(ψobst) −sin(ψobst)

sin(ψobst) cos(ψobst)

# "

cos(ψU AV) sin(ψU AV) sin(ψU AV) −cos(ψU AV)

#
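The transformation above can be sketched as two successive plane rotations. This is a minimal sketch with our own names and a standard rotation convention (obstacle body frame to inertial frame, then inertial frame to UAV body frame); the exact sign convention of the paper's matrices may differ.

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { double x, y; };

// Express an obstacle's body-frame velocity in the UAV body frame,
// given the obstacle heading psi_obst and UAV heading psi_uav.
Vec2 obstacleVelInUavFrame(Vec2 v_obst_body, double psi_obst, double psi_uav) {
    // Rotate obstacle body frame -> inertial frame.
    Vec2 vi{std::cos(psi_obst) * v_obst_body.x - std::sin(psi_obst) * v_obst_body.y,
            std::sin(psi_obst) * v_obst_body.x + std::cos(psi_obst) * v_obst_body.y};
    // Express the inertial velocity in the UAV body frame (inverse rotation).
    return Vec2{ std::cos(psi_uav) * vi.x + std::sin(psi_uav) * vi.y,
                -std::sin(psi_uav) * vi.x + std::cos(psi_uav) * vi.y};
}
```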

3. Collision cone construction

Once the measured points are sorted into separate obstacles and the state of these obstacles has been determined, the collision cones (CCs) can be constructed. The CCs are used later on for the permissible speed calculation. The way the CC is constructed for static obstacles differs from the CC for dynamic obstacles: the UAV should be able to approach static obstacles while keeping a more than sufficient separation from dynamic obstacles. In this case a minimum separation of 5 m was used, and the static obstacles should be approachable up to the protection zone of the UAV. This was achieved by constructing a collision cone with a time horizon.

Time horizon for static obstacles In order to be able to safely approach the static obstacles, the distance left until a collision has to be calculated. For this the superposition of the PZ method as used by Damas34 was utilized. This formula assumes the UAV can be represented by a circle with radius RUAV with a PZ radius Rpz, together forming Rtotal. In the current setup RUAV = 1 m and Rpz = 1.6 m. The formula for calculating this distance is presented below:

$$d_i(\theta) = h_i\left(\cos(\theta - \alpha_i) - \sqrt{\frac{R_{total}^2}{h_i^2} + \cos^2(\theta - \alpha_i) - 1}\,\right) \quad (1)$$


In Equation (1) the distance to collision di is calculated along each measurement ray at the angle θ, with

$$\alpha_i - \Delta\alpha_i \leq \theta \leq \alpha_i + \Delta\alpha_i \quad (2)$$

where $\Delta\alpha_i = \arccos\sqrt{1 - R_{total}^2/h_i^2}$ and the angle αi is the angle of the sensor ray along which the distance to the obstacle hi is measured. In Figure 6(a) this is visualized for αi = 0.
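Equations (1)-(2) translate directly into code. The sketch below (our own helper, assuming h_i > Rtotal so the arccos argument is valid) returns the distance to collision along a ray, or nothing when the ray falls outside the circle's angular extent.

```cpp
#include <cassert>
#include <cmath>
#include <optional>

// Distance to collision along the ray at angle theta, given a circle of
// radius r_total centred on a measured point at range h_i on the ray at
// angle alpha_i (Eqs. 1-2). Assumes h_i > r_total.
std::optional<double> distToCollision(double theta, double alpha_i, double h_i,
                                      double r_total) {
    double dAlpha = std::acos(std::sqrt(1.0 - (r_total * r_total) / (h_i * h_i)));
    if (std::fabs(theta - alpha_i) > dAlpha) return std::nullopt; // ray misses
    double c = std::cos(theta - alpha_i);
    double s = (r_total * r_total) / (h_i * h_i) + c * c - 1.0;
    return h_i * (c - std::sqrt(s));
}
```

For θ = αi the result reduces to h_i − Rtotal, which matches the "subtract Rtotal from the measured distance" step in the caption of Figure 7.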

The placement of one circle will of course not suffice, unless the obstacle is a point mass. In this design, the distance between the circle centers dsep is defined by a chosen minimum distance to a straight wall dmin, where dmin < Rtotal. The distance dmin is measured from the point where the two circles intersect, and dsep then follows from the geometry of an isosceles triangle:

$$d_{sep} = 2\sqrt{R_{total}^2 - d_{min}^2} \quad (3)$$

(a) A projection of a circle with radius Rtotal on a straight wall, with the UAV at a distance hi.

(b) The number of circles that would be placed on a straight line is defined by the chosen minimum separation distance dmin.

Figure 6.

In Figure 7 the projection of the PZ onto a non-uniform wall is shown.

Figure 7. The distance to collision is calculated for each sensor ray by first subtracting Rtotal from the measured distance; this is shown with the green line. Then circles with a radius of Rtotal are placed on the measured edges of the obstacle and at each point where the sensor rays have traversed a distance of dsep. These points are represented by the orange stars placed on the black line, which represents a wall. The distance to each of these circles is calculated for each sensor ray that strikes them. The smallest measurements are kept, resulting in the distance to collision as represented by the red line.


With the distance to collision for each sensor ray known, the maximum speed allowed along that ray can be calculated. A maximum speed for a certain direction means that it is the highest possible speed for which a safe stop is still guaranteed if the operator were to apply maximum braking power. The UAV model has an amax which is assumed to be the same in every direction (see Figure 3) and can also be considered the maximum braking power. The maximum allowed speed vmax in a certain direction is calculated using the same formula as Damas,34 but with amax multiplied by a safety factor fs:

$$v_{max}(i) = \sqrt{2 f_s a_{max} d_i + f_s^2 a_{max}^2 \Delta T^2} - f_s a_{max} \Delta T \quad (4)$$

where ∆T is the sampling time of the LiDAR system. The factor fs is there to leave room for error: otherwise the operator would only feel a haptic correction once the actual vmax has been exceeded, leaving no chance of stopping in time.
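Equation (4) can be sketched as a single function (our own name); with fs = 1 and ∆T = 0 it reduces to the familiar braking relation v = √(2·amax·d).

```cpp
#include <cassert>
#include <cmath>

// Eq. (4): the maximum speed along a ray for which maximum braking (amax,
// derated by the safety factor fs) still stops the UAV within the distance
// to collision d_i, given a sensor sampling time dT.
double vMaxForRay(double d_i, double a_max, double fs, double dT) {
    double a = fs * a_max;
    return std::sqrt(2.0 * a * d_i + a * a * dT * dT) - a * dT;
}
```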

Collision cone for dynamic obstacles For moving obstacles the collision cone has no time horizon. This allows for constructing a collision cone by widening the measurement group collected with the LiDAR. The extra width is calculated in the following way:

$$\Delta\alpha = \arctan\left(\frac{R_{UAV} + R_{pz}}{d_{obst}}\right)$$

where dobst is the distance to the obstacle's edge on the side that is widened. Figure 8(a) shows the construction of a collision cone for a dynamic obstacle.

4. Permissible speed calculation

After every measured obstacle has been examined, the permissible speeds are calculated. This is done by first drawing up all the available speeds, as can be seen in Figure 8(b). The UAV is assumed to be holonomic with a maximum speed of 5 m/s. The permissible velocities for a conflict-free situation are then represented by a circle with a radius of 5 m/s. This set is then translated by subtracting the obstacle velocity Vobstacle that has been translated to the body reference frame, see Figure 8(c). All possible velocities that are within the collision cone will lead to a conflict or collision and are cleared out, as can be seen in Figure 8(d).
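The construction above can be sketched as follows. This is a minimal sketch under simplifying assumptions (our own names; the collision cone is reduced to a single bearing interval [cc_lo, cc_hi] in the UAV body frame): candidate velocities are laid out on the polar grid from Figure 8(b), and a candidate is forbidden when its velocity relative to the obstacle points into the cone.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

constexpr double kPi = 3.14159265358979323846;

struct Vel { double vx, vy; };

// A velocity is forbidden when the relative velocity w.r.t. the obstacle
// has a bearing inside the collision cone interval.
bool inCollisionCone(Vel v, Vel v_obst, double cc_lo, double cc_hi) {
    double rx = v.vx - v_obst.vx, ry = v.vy - v_obst.vy; // relative velocity
    if (rx == 0.0 && ry == 0.0) return false;
    double bearing = std::atan2(ry, rx);
    return bearing >= cc_lo && bearing <= cc_hi;
}

// Polar grid of candidate velocities: 0.1 m/s radial step, pi/25 rad
// angular step, up to 5 m/s, with forbidden candidates removed.
std::vector<Vel> permissibleVelocities(Vel v_obst, double cc_lo, double cc_hi) {
    std::vector<Vel> allowed;
    for (double r = 0.1; r <= 5.0 + 1e-9; r += 0.1)
        for (int k = 0; k < 50; ++k) {
            double th = k * kPi / 25.0;
            Vel v{r * std::cos(th), r * std::sin(th)};
            if (!inCollisionCone(v, v_obst, cc_lo, cc_hi)) allowed.push_back(v);
        }
    return allowed;
}
```

For a static obstacle (v_obst = 0) only the grid directions pointing into the cone are removed; a moving obstacle shifts the forbidden region, exactly as in Figure 8(c)-(d).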

After calculation of all the permissible speeds, a conflict-free speed and direction is selected: Vsafe. The CAS first selects the closest available speed in the direction of motion; otherwise a change of heading is considered. If a Vsafe was selected in the previous cycle and it is still valid, it will be used again. If the previous Vsafe is no longer in the range of permissible speeds, the next Vsafe will be searched for in the same order as when a conflict occurs for the first time, except that now the Vsafe closest to the previously selected Vsafe will be chosen.
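The selection order described above can be sketched as follows (a simplified sketch with our own helper names: the previous Vsafe is reused while it remains permissible, otherwise the permissible velocity closest to a reference is chosen — the current velocity on a first conflict, the previous Vsafe afterwards).

```cpp
#include <cassert>
#include <cmath>
#include <optional>
#include <vector>

struct Vel { double vx, vy; };

static double dist2(Vel a, Vel b) {
    double dx = a.vx - b.vx, dy = a.vy - b.vy;
    return dx * dx + dy * dy;
}

Vel selectVsafe(const std::vector<Vel>& allowed,
                const std::optional<Vel>& prev, Vel v_current) {
    // Reuse the previous Vsafe if it is still in the permissible set.
    if (prev)
        for (const Vel& v : allowed)
            if (dist2(v, *prev) < 1e-12) return *prev;
    // Otherwise pick the permissible velocity closest to the reference.
    Vel ref = prev ? *prev : v_current;
    Vel best = allowed.front();
    for (const Vel& v : allowed)
        if (dist2(v, ref) < dist2(best, ref)) best = v;
    return best;
}
```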

C. The offline and online design specifics

As mentioned at the beginning of this section, there are two realizations of the VO-based CAS: one for the offline simulations and one for the HMI Lab simulator. The VO method that the CAS is based on was first tested in an offline simulation in MATLAB®. This offline simulation model can be seen as the first version of the VO CAS, and as a result it contains a few differences from the CAS in the HMI Lab simulator. The simulator runs on DUECA, a C++-based middleware layer that allows users to design and implement real-time programs on distributed hardware more easily. Since the VO algorithm for the haptic interface was initially developed


(a) The collision cone is constructed by adding the UAV radius RUAV and protection zone Rpz, together forming Rtotal, to the measured edges.

(b) In the CAS this permissible velocities map is constructed by plotting the velocities as polar coordinates, with the step between the radial speeds being 0.1 m/s and the angle between the radials being π/25 rad.

(c) The set of available velocities translated by the obstacle velocity (see text).

(d) The velocities inside the collision cone are cleared out (see text).

Figure 8.

and tested in MATLAB®, the idea was to use the MATLAB Coder™ to export the developed algorithm to C and implement it in the simulator. The resulting code, however, was not flexible, and the whole algorithm was ultimately rewritten in C++. This means that the final adjustments of the algorithm were done in the DUECA environment, resulting in a deviation from the algorithm as it was tested in the MATLAB® environment. On top of that, the available speeds display and haptic force rendering were developed solely in the DUECA environment. These two modules will be explained first.

1. The available speeds display

All the information gathered at this point results in a conflict resolution strategy of guiding the operator towards a selected speed and heading, Vsafe. In an attempt to aid the operator in understanding the avoidance strategies generated by the CAS, a third display is added to the haptic interface. This display is egocentric, meaning that it is fixed to the body reference frame. The available speeds display is constructed using the data acquired with the LiDAR


and is a direct visualization of the available speeds as calculated by the CAS. Besides the available speeds, the display also shows the current speed flown by the UAV and the selected safe speed. Furthermore, the distance towards static obstacles is shown by a continuous line that is a direct translation of the sensor measurement data, to expand the operator's understanding of the system. If there are no measurements, the line is an undisturbed circle. Once a static obstacle is measured, the line takes on the outline of the measured obstacle. The navigation and available speeds displays are shown side by side in Figure 9.

(a) The available speeds display when approaching a building: 1) is the current speed, 2) is the safe speed selected by the CAS, 3) is the line showing the distance measurement data from the LiDAR.

(b) The situation that belongs to the available speeds as portrayed in Figure 9(a).

Figure 9. The available speeds display and navigation display alongside each other, displaying the same situation.

Whenever a moving obstacle is measured, its position relative to the UAV is indicated with a green cone. The cone encompasses the measured edges of the obstacle.

2. The haptic force rendering

The goal of the haptic feedback is to guide the operator towards the selected Vsafe. When a conflict occurs, the CAS will guide the operator out of the forbidden velocity zone. The difference between the selected Vsafe and the current Vconflict can be expressed in polar coordinates as a change in magnitude ∆V and yaw angle ∆ψ. These changes are then translated to desired stick inputs in the following way:

$$x_{stick} = \begin{cases} -1 & \text{if } \Delta\psi < -0.32\ \mathrm{rad} \\ \Delta\psi/0.32 & \text{if } -0.32\ \mathrm{rad} \leq \Delta\psi \leq 0.32\ \mathrm{rad} \\ 1 & \text{if } \Delta\psi > 0.32\ \mathrm{rad} \end{cases} \quad (5)$$

$$y_{stick} = \begin{cases} -1 & \text{if } \Delta V < -5\ \mathrm{m/s} \\ \Delta V/5 & \text{if } -5\ \mathrm{m/s} \leq \Delta V \leq 5\ \mathrm{m/s} \\ 1 & \text{if } \Delta V > 5\ \mathrm{m/s} \end{cases} \quad (6)$$


where 1 and -1 represent the maximum stick deflection. The translation of a desired change of speed to a haptic signal is illustrated in Figure 10.

(a) A is the current speed the UAV is flying and B is the Vsafe proposed by the CAS. Both speeds are displayed in the body reference frame.

(b) The change in u is 0.53 m/s and the required change in heading because of the v component is 0.42 rad. The resulting haptic force is −0.535 × Klong in the Y direction and −1 × Klat in the X direction.

Figure 10. Example of a haptic force rendering as a result of a suggested Vsafe. Figure 10(a) shows the speeds in the body reference axes and Figure 10(b) shows the resulting haptic signal in the reference frame of the stick.

The haptic rendering uses the xstick and ystick offsets to generate a haptic force. The haptic forces are modeled as a spring with different spring constants in the longitudinal and lateral directions. After a few trial-and-error runs in the simulator, the constants chosen for this interface are:

Klat = 2.5 [Nm]
Klong = 2.86 [Nm]

To generate the haptic force, Klat and Klong are multiplied with the required displacements xstick and ystick, respectively:

Mx = xstick · Klat
My = ystick · Klong
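Taken together, Eqs. (5)-(6) and the spring model can be sketched as follows (function names are ours; note that the piecewise saturation in Eqs. (5)-(6) is simply a clamp to the stick's deflection range):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

constexpr double kLat  = 2.5;  // [Nm] lateral spring constant
constexpr double kLong = 2.86; // [Nm] longitudinal spring constant

// Eqs. (5)-(6): desired normalized stick offsets from the required change
// in yaw angle dPsi [rad] and speed dV [m/s].
double xStick(double dPsi) { return std::clamp(dPsi / 0.32, -1.0, 1.0); }
double yStick(double dV)   { return std::clamp(dV / 5.0, -1.0, 1.0); }

// Haptic moments: the offsets scaled by the spring constants.
double momentX(double dPsi) { return xStick(dPsi) * kLat; }
double momentY(double dV)   { return yStick(dV) * kLong; }
```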

The haptic signal is passed through a first-order low-pass filter before being sent to the side stick; otherwise the operator would likely be exposed to large force deflections in a very short period of time. The filter was tuned to have a rise time of one second, resulting in a gain K = 1 and time constant τ = 0.16 s. For a discrete-time system with a sampling rate of 50 Hz, the z-transform gives:

Y[n] = 0.8825 · Y[n − 1] + 0.1175 · X[n]

3. Module details of the offline model

The main differences between the offline simulation and the HMI Lab simulator are that the MATLAB® simulation is two-dimensional, does not run in real time, and models the pilot with a PD controller (P = 0.1; D = 0.3) that acts on the target position error. The UAV model, autopilot, and dynamic obstacles are all modeled in Simulink®, and the VO CAS is called as a function from MATLAB.


The output of the CAS is the selected Vsafe, which is normalized by dividing it by the Vmax of the UAV. The difference between the PD pilot command and the CAS command is subtracted from the pilot command, so that the CAS command effectively becomes the input to the UAV model. This is a simplification that omits the modeling of an NMS model with stick dynamics and haptic forces as used in Ref. 10.
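The command mixing described above can be sketched in one line; as the text notes, subtracting the difference algebraically leaves only the CAS command (hypothetical names):

```python
def mixed_command(pilot_cmd, cas_cmd):
    """Subtract the (pilot - CAS) difference from the pilot command.
    Algebraically this reduces to the CAS command itself, which is the
    simplification used in the offline model."""
    return pilot_cmd - (pilot_cmd - cas_cmd)
```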

The dynamic obstacles are represented by squares with a side length of 6 [m]. They commence their movement at the start of the simulation, moving linearly with no acceleration and a velocity of 5 [m/s].

The offline VO CAS receives the UAV's position, velocity, previous Vsafe, map details, time step and dynamic obstacle positions. With each time step the environment is drawn and the LiDAR is simulated with a line intersection method. This method calculates the intersections measured closest to the UAV for the static and complete environment at that specific point in time. These intersections are then labeled as map obstacles and all obstacles respectively, as explained previously.
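A minimal sketch of such a line-intersection LiDAR, assuming a 2D ray-versus-segment test; the names are hypothetical and the actual implementation was done in MATLAB®:

```python
import math

def ray_segment_hit(ox, oy, theta, x1, y1, x2, y2):
    """Distance along a ray (origin, heading) to a wall segment, or None."""
    dx, dy = math.cos(theta), math.sin(theta)
    ex, ey = x2 - x1, y2 - y1
    denom = dx * ey - dy * ex
    if abs(denom) < 1e-12:                          # ray parallel to segment
        return None
    t = ((x1 - ox) * ey - (y1 - oy) * ex) / denom   # distance along ray
    s = ((x1 - ox) * dy - (y1 - oy) * dx) / denom   # position on segment
    if t >= 0.0 and 0.0 <= s <= 1.0:
        return t
    return None

def scan(ox, oy, segments, n_rays=360):
    """Nearest intersection per ray, mimicking a planar LiDAR sweep."""
    hits = []
    for i in range(n_rays):
        theta = 2.0 * math.pi * i / n_rays
        dists = [d for seg in segments
                 if (d := ray_segment_hit(ox, oy, theta, *seg)) is not None]
        hits.append(min(dists) if dists else None)
    return hits
```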

Furthermore, the MATLAB® simulation included a trial in estimating the velocity of an obstacle by calculating the COG of its measurement points, based on the same method as used by Dong.35 The movement of this COG over time was monitored to produce an approximation of the obstacle's speed. This required a high resolution (n = 15,000), meaning that the computational cost was very high. Since the results were dependent on the relative position and approach path of the obstacles and always contained jitter, this approach was scrapped in the HMI Lab simulator. The resolution of the rays has also been drastically lowered in order to allow a simulation rate of 50 [Hz] in the HMI Lab simulator.
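The (eventually scrapped) centroid-tracking idea can be sketched as follows, with hypothetical names; the real implementation operated on the LiDAR returns attributed to one obstacle:

```python
def centroid(points):
    """Centre of gravity (COG) of the measurement points of one obstacle."""
    xs, ys = zip(*points)
    return sum(xs) / len(xs), sum(ys) / len(ys)

def estimate_velocity(points_prev, points_now, dt):
    """Finite-difference velocity of the centroid between two scans.
    Jittery in practice, since different scans see different parts of
    the obstacle, which is why the approach was dropped."""
    (x0, y0), (x1, y1) = centroid(points_prev), centroid(points_now)
    return (x1 - x0) / dt, (y1 - y0) / dt
```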

Figure 11. A schematic representation of the closed-loop autonomous system as used for the offline testing of the VO method.

4. Module details of the HMI Lab implementation

The HMI Lab simulator is much more elaborate than the offline version and is based on the same environment where the experiments were conducted by Lam.8 This is a 3D environment, but is scanned in a horizontal planar field. The environment is simulated with the OpenSceneGraph toolkit and the LiDAR measurements are mimicked with the LineSegmentIntersector utility. By applying different NodeMasks to map elements and dynamic obstacles, the LiDAR sensor module can differentiate between these objects.

The dynamic obstacles are triggered when the UAV approaches a “tripwire”, in contrast to the offline model where they would already be moving at t = 0. This tripwire was spanned across the route the UAV had to follow and was placed in such a way that the UAV would always cross paths with the obstacle under normal circumstances. The obstacles themselves were represented by a simple helicopter with a yellow fuselage, making them stand out against their environment. The used model is portrayed in Figure 12.

Figure 12. The 3DS model of the moving obstacle as used in the simulator.

IV. Testing

A. Offline simulations setup

The VO method was developed and tested offline in the same simple autonomous system as used by Lam and Boschloo. Figure 13 shows a schematic representation of this system. In the end the AFF method is replaced with the VO method. Where the AFF-based CAS would produce a risk vector to influence the steering command, the VO-based CAS produces a desired heading and speed change.34


Figure 13. A schematic representation of the closed-loop autonomous system as used for offline testing by Lam.7 The AFF method was replaced with the VO method.

Target input The “target” that enters the pilot model in Figure 13, is a destination towards which the pilot should fly.

Pilot Model The pilot model is composed of a PD controller (P = 0.1; D = 0.3) that receives the position error of the UAV and transforms it into a steering command Mc, which is limited to 1. This command translates directly to a stick deflection, so no stick dynamics or NMS dynamics were modeled in this simulation.
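A minimal sketch of such a PD pilot, assuming a simple backward-difference derivative and a hypothetical time step of 0.02 s:

```python
class PDPilot:
    """PD controller on the target position error (P = 0.1, D = 0.3),
    output clipped to the steering command limit of 1."""
    def __init__(self, p=0.1, d=0.3, dt=0.02):
        self.p, self.d, self.dt = p, d, dt
        self.prev_error = 0.0

    def step(self, error):
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        cmd = self.p * error + self.d * derivative
        return max(-1.0, min(1.0, cmd))   # limited to +/-1
```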

CAS The CAS (in Figure 13 represented by the AFF) computes the Vsafe and checks the error between Vsafe and VUAV. The output is the required correction signal to attain the selected Vsafe.

Environment The CAS was subjected to environments with only static obstacles, only moving obstacles, and an environment with both. All dynamic obstacles moved linearly with maximum speeds of 5 [m/s] from all cardinal and intercardinal directions. The static obstacles were rectangles or triangles with sharp edges and relatively narrow passages with a width of 4RUAV.


B. Human-in-the-loop experiment

The experiment in the HMI Lab simulator was executed to verify that a haptic interface based on the VO method has potential as a CAS in a dynamic environment. The offline simulation showed promising results with a simplified pilot model. These tests were however purely meant for developing the CAS algorithm and say nothing about the operator's interaction with the interface. The human-in-the-loop experiment helps to gain insight into the effects on the operator's mental and physical workload and how it influences his or her performance.

The experiment was performed with five participants, four male and one female. All subjects were right-handed and had no previous experience in the HMI Lab simulator or in tele-operating a UAV.

Apparatus The experiment was conducted in the HMI Lab simulator, which is a fixed-base simulator. The control device used by the subjects to operate the UAV was a side-stick with electro-hydraulic actuators to provide force feedback.

Figure 14. The HMI Lab simulator where the experiments were conducted with: 1) the available speeds display 2) the navigation display 3) the side stick 4) the camera display.

The stick dynamics were the same as used in the previous interface: the stick inertia was Ist = 0.01 [kg m2], the damping coefficient Bst = 0.2 [N m s/rad] and the spring constant Kst = 2.0 [N m/rad]. These were defined at 0.09 [m] above the rotation axis, which is an approximation of the position of the center of the hand. The same dynamics were used for the lateral and the longitudinal motion. Subjects were seated in an aircraft chair that was adjustable in height and distance to the interface monitors. The operator had three displays to monitor: the on-board camera view, the navigation display and the available speeds display. The on-board camera view was projected onto a large white wall at a distance of about 3 [m] in front of the operator. The navigation display was situated in front of the operator and the available speeds display slightly to the left-hand side. Both were shown on LCD displays. The whole set-up is shown in Figure 14.

Simulated Environment The operator is asked to fly through an environment based on the same one used by Lam in his experiments. This environment consists of the same six subtasks designed by Lam, repeated three times over three different sectors. The static obstacles were complemented with moving obstacles that moved in a linear manner and would spawn when the UAV passed a certain trigger line. There were three different moving obstacles, portrayed in Figure 15. These obstacles are labeled intruder 1, 2 and 3 and their purpose is explained briefly here:


Intruder 1: This obstacle appears around the corner while the operator is performing the manoeuvre of flying along a wall. The goal was to test the effectiveness of the CAS on obstacles that appear suddenly and become visible to the operator only at the very last moment.

Intruder 2: This obstacle would cross paths with the UAV in the open field and would lead to a collision if the operator did not adjust the heading or speed. The obstacle was in the operator's FOV the whole time.

Intruder 3: This obstacle would approach the UAV head on, while flying through a corridor. The operator would have ample time (± 15 [s]) to get out of the corridor before the obstacle would actually cause a collision. This scenario was chosen to check the behavior of the CAS when a conflict is detected while situated in a confined space.

(a) Intruder 1. (b) Intruder 2. (c) Intruder 3.

Figure 15. The three situations with a moving obstacle that each operator would encounter.

Procedure The participants received a document in advance, explaining the goal of the experiment, and a short briefing. This document was recapped and the briefing was finalized just before the experiment. If the participants had no more questions and were aware of their rights as research participants, they gave their informed consent.

Participants were instructed to fly the given course without causing a collision. The course was marked with waypoints that were represented by smoke plumes through which the operator had to navigate the UAV. In addition the participants were asked to fly through the waypoints as close as possible and to complete the course as fast as possible. If the UAV would have a collision, the screen would freeze for 5 seconds and the UAV would be placed back into the position just prior to the collision.

The experiment started with two training runs, one with and one without haptic feedback. This way the participants could familiarize themselves with the tasks, the environment and the interface. After the two training runs, the participants flew the measurement runs. To prevent the operators from building up familiarity with the course and anticipating moving obstacles, each course was varied by changing the order of the three sectors and the spawn points of the intruders. After each run, participants were offered refreshments and had a short break to allow for sufficient rest. The participants were asked to fill in the questionnaires for measuring the subjective dependent variables.


Independent variables The only independent variable in this experiment was the haptic feedback, which was either turned on or off. The subjects all first flew the course without haptic feedback and then with the force feedback. The CAS was still active, so the available speeds display was usable in both situations.

Dependent variables Each run was evaluated on safety, operator performance, control activity, and physical and mental workload. These were measured with a mix of explicit and subjective methods. Safety was measured as the number of collisions and the separation distance to the moving obstacles. Operator performance is measured by the time needed to complete the course, tel, and how closely the operator passes through the waypoints set out on the course, Dwp. Control activity is represented by the standard deviation of the moment exerted on the stick, σMh. The workload is measured with the help of a TLX rating scale.

C. Hypotheses

Before the testing was executed, the following hypotheses were defined:

The velocity obstacle method will perform better at avoiding moving obstacles than the artificial forcefield method

The first hypothesis serves as a justification for the change of obstacle avoidance method. Since the start of the research, the AFF method has been used as a successful algorithm for the collision avoidance system. The AFF as tested by Boschloo4 will be used to verify its effectiveness in avoiding moving obstacles. The expectation is that the AFF-based CAS will fail to evade obstacles coming from the side and crossing paths from behind. The AFF method as used by Boschloo only considers the distance between the UAV and an obstacle as well as the UAV's velocity component towards the obstacle. If the obstacle approaches from the side, there will be no velocity component towards the obstacle and the CAS severely underestimates the risk.
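This limitation can be illustrated with a small numerical sketch (hypothetical names and geometry): for an obstacle exactly abeam on a collision course, the UAV's velocity component towards it is zero, while a VO-style collision-cone test on the relative velocity still flags the conflict:

```python
import math

def closing_speed(rel_pos, uav_vel):
    """Component of the UAV's own velocity towards the obstacle
    (the AFF-style risk cue described in the text)."""
    dist = math.hypot(*rel_pos)
    return (uav_vel[0] * rel_pos[0] + uav_vel[1] * rel_pos[1]) / dist

def in_collision_cone(rel_pos, rel_vel, radius):
    """VO-style test: does the relative velocity point into the cone of
    bearings that pass within `radius` of the obstacle?"""
    dist = math.hypot(*rel_pos)
    if dist <= radius:
        return True
    los = math.atan2(rel_pos[1], rel_pos[0])        # line of sight
    vel = math.atan2(rel_vel[1], rel_vel[0])
    half_angle = math.asin(radius / dist)
    diff = abs((vel - los + math.pi) % (2 * math.pi) - math.pi)
    return diff < half_angle and (rel_vel[0]**2 + rel_vel[1]**2) > 0

# UAV flies +x at 5 m/s; obstacle sits 20 m abeam at (0, 20) and moves
# with (5, -5) m/s, so the relative velocity (0, 5) points straight at it.
```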

The haptic feedback will increase the level of safety if compared to manual control

The purpose of the haptic interface is to support the operator in high-risk situations by advising him or her about heading and speed. Even if limited visual information is available, the haptic feedback is expected to be a valuable tool to guide the operator. This hypothesis considers the increase of safety as both a reduction in collisions and a greater separation distance from moving obstacles. These are the explicit methods to measure safety. The operators will furthermore be questioned on how they experienced the run with or without haptic feedback, to see if the haptics increase the feeling of safety as well.

The haptic feedback will decrease the mental workload if compared to manual control

This hypothesis follows the same reasoning as the safety aspect. The haptic support is expected to aid the operator in understanding the boundaries of his or her maneuverable space. Otherwise the operator has to interpret and integrate the available information on the displays, instead of sensing it through the side stick.

The haptic feedback will increase the physical workload if compared to manual control


The anticipation is that the haptic feedback will cause a higher physical workload, as the haptic feedback has not been tuned with a model of the neuromuscular system beforehand. Additionally, the CAS causes the stick to move instead of resisting the input.

V. Results

A. Offline simulations

(a) The UAV manoeuvres through an obstacle-filled environment. (b) The UAV avoids a side collision. (c) The UAV avoids a head-on collision.

Figure 16. Some of the offline simulation results with the VO method. The gray blocks are the static obstacles; the trail of the moving ones is marked with crosses. The path of the UAV is marked with circles. The positions are displayed at 2-second intervals.

The prediction was that the used AFF method would not be suitable for avoiding moving obstacles, since it does not take the state of an obstacle into account. The result was that in order to avoid moving obstacles, the AFF would require a protection zone of at least 10 [m]. This means it would be hard to approach static obstacles. Furthermore, obstacles that move along the x-axis of the UAV caused no evasive manoeuvres. The risk calculation for obstacles approaching from the side showed no successful evasions either, as shown in Figure 17.

Now that the AFF has shown its limitations when trying to avoid the moving obstacles, the developed VO algorithm was investigated. The VO method proved successful in avoiding both static and dynamic obstacles. Some of the results are shown in Figure 16.

B. Human-in-the-loop Evaluation Results

Since the results were gathered with only a small group of subjects (n = 5), the sample size did not allow for proper statistical inference. Nevertheless a Friedman test was performed on the number

(a) The obstacle enters the riskfield from the left-hand side at 5 m distance from the UAV. (b) The obstacle is inside the PZ of the UAV and the operator reacts. (c) The reaction came too late, resulting in a collision.

Figure 17. The UAV that uses the AFF riskfield algorithm looks ahead in the direction of flight (green arrow). The AFF reacts to the obstacle moving at 5 [m/s] when it enters the PZ (red arrow).

of collisions to see whether the haptic feedback did have an effect on this aspect. The box plots have their maximum whisker length specified as 1.5 times the interquartile range, with everything outside being defined as outliers.
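The whisker convention can be sketched as follows (hypothetical helper; the actual plots were produced elsewhere):

```python
from statistics import quantiles

def whisker_bounds(data):
    """Box-plot whisker limits at 1.5 * IQR beyond the quartiles;
    anything outside these bounds is drawn as an outlier."""
    q1, _, q3 = quantiles(data, n=4)   # default 'exclusive' method
    iqr = q3 - q1
    return q1 - 1.5 * iqr, q3 + 1.5 * iqr
```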

Safety Safety was measured by counting the number of collisions and measuring the minimum separation between the UAV and the moving obstacles. Furthermore, the participants were asked how aware they felt of possible dangers. The results of this questionnaire were normalized and are shown in Figure 18. The number of collisions was subjected to a Friedman test to see if the force feedback had any effect on the number of collisions Ncollisions. The test rejected the null hypothesis at a significance level of 0.05 (α = 0.0455), indicating that the haptic feedback has an influence on Ncollisions. The total number of collisions is shown in Figure 19(a) and the number of collisions due to moving obstacles is shown in Figure 19(b). It is interesting to see that the force feedback prevented collisions with intruders 1 and 2 but seems to have caused them with intruder 3. This was the scenario in which the UAV was in an alley while an obstacle is approaching from the front.

Figure 18. The participants' subjectively perceived awareness of possible dangers for the two settings.

Figure 20 shows the minimum separation Dmin between the UAV and the intruders, measured from the UAV's center. The bars below the line indicate a collision, while loss of separation happened below 5 [m]. It is noteworthy that there have not been any collisions with dynamic obstacles,


(a) The total number of collisions. (b) Collisions due to moving obstacles.

Figure 19. The number of collisions for the two settings.

meaning that the collisions due to intruder 3 shown in Figure 19(b) were with the wall and not the obstacle itself.

Figure 20. The minimum separations Dmin [m] between the UAV and moving obstacles, without haptics and with force feedback. Note that these are sorted measurements and not paired per intruder. Whenever the separation was below 1.6 [m] it would result in a collision. This distance is represented by the horizontal line.
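For constant velocities, the minimum separation between the UAV and an intruder can be computed in closed form; a sketch with hypothetical names:

```python
import math

def min_separation(p_uav, v_uav, p_obs, v_obs, t_max):
    """Closest approach between the UAV and a linearly moving obstacle
    over [0, t_max], assuming both hold constant velocity."""
    rx, ry = p_obs[0] - p_uav[0], p_obs[1] - p_uav[1]   # relative position
    vx, vy = v_obs[0] - v_uav[0], v_obs[1] - v_uav[1]   # relative velocity
    v2 = vx * vx + vy * vy
    t_star = 0.0 if v2 == 0 else -(rx * vx + ry * vy) / v2
    t_star = min(max(t_star, 0.0), t_max)               # clamp to the window
    return math.hypot(rx + vx * t_star, ry + vy * t_star)
```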

Performance The operator performance was measured by the required time for the whole run and the minimum distance to the waypoints Dwp. Figure 21(a) shows the Dwp per participant per setting, while Figure 21(b) shows the average. The time the participants needed to complete their runs is displayed in Figure 22. Unfortunately not all participants were able to finish their runs due to sudden crashes of the simulator. For participants two and three there are no lap times for their flight with the force feedback, while for participant four the simulator crashed prematurely when the haptics were disabled.

Control activity The control activity is represented by the standard deviation of the moment exerted on the stick, σMh. The measured forces on the stick have been converted to the exerted moments in the lateral (stick X-axis) and longitudinal (stick Y-axis) directions. The results are summarized in Table 1. The high value of the measured moment in the longitudinal direction suggests that the participants were counteracting the haptic forces on the stick.

Upon closer inspection of the data it is found that the participants responded well to the haptic warnings for intruders 1 and 2.

(a) The closest approach per person. (b) The average approach.

Figure 21. The approach distance to the waypoints.

Figure 22. The time that participants needed to complete a lap. For Participants 2 and 3 the simulator crashed midway with the force feedback and for Participant 4 the simulator crashed in the run without the haptic feedback.

When looking at the data for the reverse parking subtask in Figure 23(b) and the evasion of intruder 3, the operators clearly struggle with the haptic warning signals and in some cases overrule them. This is illustrated with a few examples in Figure 23. Figure 23(a) shows the operator steering against the haptic signal, resulting in a high measured moment on the stick.

Interface setting   lateral σMh [Nm]   lateral max [Nm]   longitudinal σMh [Nm]   longitudinal max [Nm]
force feedback      0.48               5.0                0.76                    7.7
without haptics     0.45               3.7                0.45                    3.7

Table 1. The standard deviations of the measured moments on the stick in lateral and longitudinal direction, along with the maximum moments exerted on the stick.


(a) Left are the lateral measurements and right are the longitudinal measurements of Participant 3 when encountering Intruder 3. (b) The measurements of Participant 2's first reverse parking attempt. (c) Participant 3 reacting positively to the haptic signal in longitudinal direction when encountering intruder 1.

Figure 23. The measured stick moments and (desired) stick positions for the highlighted encounters.

Workload The appraisal of the operators' workload was done with two forms, the NASA TLX form and a custom form based on the one that Lam used in his research. The latter questionnaire was aimed at determining how the participants experienced the haptic forces. The results of the TLX form are shown in Figure 24. In Lam's questionnaire the mental workload experience was rated by asking how difficult it was to plan an evasive manoeuvre. The physical workload was assessed by letting participants rate how difficult it was to manoeuvre in a closely spaced environment. The results of the custom form are shown in Figure 25.


(a) Mental workload. (b) Physical workload. (c) Temporal demand. (d) Performance. (e) Effort. (f) Frustration. (g) Overall workload.

Figure 24. The normalized results from the TLX workload test. The maximum whisker length is defined as 1.5 times the interquartile range; the red crosses are outliers.

(a) Mental workload. (b) Physical workload.

Figure 25. The normalized results for the questionnaire based on Lam's research.

VI. Discussion

The hypothesis was that the haptic interface based on the VO method would increase safety and performance and lower the mental workload. The offline simulation showed promising results in evading moving obstacles but was not fully comparable to the actual HMI due to the simplistic operator model and lack of an NMS model. The HMI-Lab simulator experiment shows that the current haptic interface does help in the avoidance of moving obstacles. The separation between the UAV


and moving obstacles increases in an overall perspective and no collisions with intruders 1 and 2 occurred while the force feedback was turned on. The operators however did not experience a noticeable difference in awareness of dangerous manoeuvres according to the questionnaire results. And even though the Friedman test showed that the haptic forces have an effect on lowering the number of collisions, they do seem to cause collisions when operating in tight spaces. The plots in Figure 23 of the reverse parking and the evasion of intruder 3 have one thing in common: the alternating behavior of the desired stick position in lateral direction. The CAS has a very narrow selection of available speeds when operating in a snug passage. When the flight direction will result in a conflict, the angle of error ψerror determines the stick deflection. The maximum lateral haptic force is exerted when ψerror ≥ 0.32 [rad], meaning an error of just over 18° causes the full haptic force to be exerted on the operator's hand. This results in an overshoot, and since the UAV is in a tight space, the overshoot causes a ψerror to the other side. On top of that, the algorithm was developed for a 50 [Hz] simulation rate, but it was found afterwards that the tests were performed at 100 [Hz]. This means the low-pass filter had a rise time of 0.5 seconds instead of one.
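The saturation behavior can be sketched as below; the linear-with-saturation shape is an assumption, as the text only states that the maximum force is reached at ψerror ≥ 0.32 [rad]:

```python
def lateral_stick_command(psi_error, psi_sat=0.32):
    """Lateral haptic command saturating at +/-1 for |psi_error| >= psi_sat.
    Full deflection at an error of only ~18 deg is the steep gain that
    drives the lateral overshoot observed in narrow passages."""
    return max(-1.0, min(1.0, psi_error / psi_sat))
```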

When looking at the performance there seems to be no significant effect caused by the haptic feedback. The missing data for the lap times also makes it hard to see a distinct effect. The inspection of the control activity shows that the haptic interface can be counterintuitive at times. Apart from the problems in the lateral stick movement due to the stiff controller, the lack of a time horizon for moving obstacles also seems to cause problems. When evading intruder 3, operators had plenty of time to move out of the alley and go around the intruder. The CAS however detects the conflict and selects the only available safe speed to its knowledge: reverse. Despite the counterintuitive behavior of the CAS in narrow spaces, the frustration level did not increase according to the TLX test. The haptic interface does seem to decrease mental workload and increase physical workload in the TLX test, but results from the questionnaire designed by Lam suggest otherwise. Figure 25(b) suggests that the haptic feedback reduces the physical workload. A possible explanation is that the formulation of the question was not clear.

VII. Conclusions

This research presented a haptic interface based on the VO method to help avoid linearly moving obstacles in the planar field. This interface computes the forbidden velocity zones and guides the operator out of them or prevents him or her from entering them.

• The interface successfully prevented collisions with moving obstacles in the open field and with obstacles that suddenly appeared from behind a building. The separation distance between the UAV and moving obstacles was also bigger when the haptic feedback was engaged.

• The current haptic force generation causes problems when operating in a tight space. Operators often had to counteract the forces in lateral direction when trying to reverse into an alley or when flying through a corridor with an obstacle heading at the entrance.

• The overall workload stayed the same, but the haptic feedback caused a larger physical workload whereas the manual control saw a slightly higher mental demand.

VIII. Recommendations and future work

With the first trial of the VO for a haptic interface it was found that the haptic forces in the lateral direction were too sensitive and too strong. Since changing the heading is hardly as urgent as an emergency stop, it is predicted that making the controller softer for the lateral stick movement will not have a negative influence on preventing collisions. Furthermore, the current method of projecting the PZ onto a static obstacle works for a straight wall, but for more non-uniform obstacles with sharp edges, the dmin can not be guaranteed. Another recommendation is the implementation of a time horizon for moving obstacles, or scaling the generated forces based on the time to collision. This should increase the evasion manoeuvres available to the operator and reduce the physical workload in the intruder 3 scenario.

For future work it is recommended to verify the use of the displays with a larger group of subjects. Operators have to divide their attention over three displays, which can negatively affect performance. Something else that is interesting to investigate is the possibility of making the system more independent by estimating the obstacles' states. This can be done with computer vision and by using the LiDAR data to construct point clouds.

References

1

[1] Custers, B. H. M., Oerlemans, J. J., and Vergouw, S. J., Het gebruik van drones, Boom Lemma Uitgevers, 2015, in Dutch.

[2] Murphy, R. R., "A decade of rescue robots," International Conference on Intelligent Robots and Systems, Oct. 2012, pp. 5448–5449.

[3] McCarley, J. S. and Wickens, C. D., Human factors implications of UAVs in the national airspace, University of Illinois, 2005.

[4] Boschloo, H. W., Lam, T. M., Mulder, M., and Van Paassen, M. M., "Collision Avoidance System for a Remotely-Operated Helicopter using Haptic Feedback," Proc. of the IEEE Conference on Systems, Man, & Cybernetics (IEEE-SMC), The Hague, The Netherlands, October 10-13, 2004, pp. 229–235.

[5] Lam, T. M., Mulder, M., and Van Paassen, M. M., "Collision Avoidance in UAV Tele-Operation with Time Delay," Proc. of the IEEE Conference on Systems, Man, & Cybernetics (IEEE-SMC), Montreal, Canada, October 8-11, 2007, pp. 997–1002.

[6] Lam, T. M., Mulder, M., and Van Paassen, M. M., "Haptic Feedback for UAV Tele-operation – Force Offset and Spring Load Modification," Proc. of the IEEE Conference on Systems, Man, & Cybernetics (IEEE-SMC), Taipei, Taiwan, October 8-11, 2006, pp. 1618–1623.

[7] Lam, T. M., Mulder, M., and Van Paassen, M. M., "Haptic Interface for UAV Collision Avoidance," The International Journal of Aviation Psychology, Vol. 17, No. 2, 2007, pp. 167–195.

[8] Lam, T. M., Mulder, M., and Van Paassen, M. M., "Haptic Feedback in UAV Tele-operation with Time Delay," Journal of Guidance, Control & Dynamics, Vol. 31, No. 6, 2008, pp. 1728–1739.

[9] Lam, T. M., Mulder, M., and Van Paassen, M. M., "Haptic Interface in UAV Tele-operation using Force-Stiffness Feedback," Proc. of the IEEE Conference on Systems, Man, & Cybernetics (IEEE-SMC), San Antonio (TX), October 11-14, 2009, pp. 851–856.

[10] Lam, T. M., Boschloo, H. W., Mulder, M., and Van Paassen, M. M., "Artificial Force Field for Haptic Feedback in UAV Tele-operation," IEEE Transactions on Systems, Man & Cybernetics, Part A, Vol. 39, No. 6, 2009, pp. 1316–1330.

[11] Smisek, J., Sunil, E., Van Paassen, M. M., Abbink, D. A., and Mulder, M., "Neuromuscular-System-Based Tuning of a Haptic Shared Control Interface for UAV Teleoperation," IEEE Transactions on Human-Machine Systems, Vol. 47, No. 4, 2017, pp. 449–461.

[12] Rakotomamonjy, T. and Binet, L., "Using Haptic Feedbacks for Obstacle Avoidance in Helicopter Flight," 6th European Conference for Aeronautics and Space Sciences, 2015.

[13] Duberg, D., Safe Navigation of a Tele-operated Unmanned Aerial Vehicle, Master's thesis, School of Computer Science and Communication, KTH, 2018.

[14] Brandt, A., "Haptic Collision Avoidance for a Remotely Operated Quadrotor UAV in Indoor Environments," IEEE International Conference on Systems, Man and Cybernetics, 2010.

[15] Roberts, A., "Haptic feedback and visual servoing of teleoperated unmanned aerial vehicle for obstacle awareness and avoidance," International Journal of Advanced Robotic Systems, 2017.

[16] Chakravarthy, A. and Ghose, D., "Obstacle avoidance in a dynamic environment: a collision cone approach," IEEE Transactions on Systems, Man and Cybernetics, Vol. 28, No. 5, Sept. 1998, pp. 562–574.

[17] Van Dam, S. B. J., Mulder, M., and Van Paassen, M. M., "Ecological Interface Design of a Tactical Airborne Separation Assistance Tool," IEEE Transactions on Systems, Man & Cybernetics, Part A, Vol. 38, No. 6, 2008, pp. 1221–1233.


[18] Ellerbroek, J., Brantegem, K. C. R., Van Paassen, M. M., and Mulder, M., "Design of a Coplanar Airborne Separation Display," IEEE Transactions on Human-Machine Systems, Vol. 43, No. 3, 2013, pp. 277–289.

[19] Mercado-Velasco, G. A., Borst, C., Ellerbroek, J., Van Paassen, M. M., and Mulder, M., "The Use of Intent Information in Conflict Detection and Resolution Models Based on Dynamic Velocity Obstacles," IEEE Transactions on Intelligent Transportation Systems, Vol. 16, No. 4, 2015, pp. 2297–2302.

[20] Ali, A., "A Novel Obstacle Avoidance Control Algorithm in a Dynamic Environment," Recent Advances in Electrical Engineering Series, Vol. 11, 2013, pp. 44–49.

[21] Large, F., Vasquez, D., Fraichard, T., and Laugier, C., "Avoiding cars and pedestrians using velocity obstacles and motion prediction," IEEE Intelligent Vehicles Symposium, 2004.

[22] Payandeh, S. and Stanisic, Z., "On Application of Virtual Fixtures as an Aid for Telemanipulation and Training," Proc. of the 10th Symposium on Haptic Interfaces for Virtual Environments and Teleoperator Systems, 2002.

[23] Massimino, M. and Sheridan, T. B., "Teleoperator performance with varying force and visual feedback," Human Factors, Vol. 36, 1994, pp. 145–157.

[24] Abbink, D. A. and Mulder, M., "Exploring the Dimensions of Haptic Feedback Support in Manual Control," Journal of Computing and Information Science in Engineering, Vol. 9, No. 1, March 2009, pp. 011006–011015.

[25] Abbink, D. A., Mulder, M., and Boer, E. R., "Haptic shared control: smoothly shifting control authority?" Cognition, Technology & Work, Vol. 14, No. 1, March 2012, pp. 19–28.

[26] van Beek, F. E., Making sense of haptics: fundamentals of haptic perception and their implications for haptic device design, Ph.D. thesis, Vrije Universiteit Amsterdam, 2016.

[27] de Jonge, A. W., Wildenbeest, J. G. W., Boessenkool, H., and Abbink, D. A., "The Effect of Trial-by-Trial Adaptation on Conflicts in Haptic Shared Control for Free-Air Teleoperation Tasks," IEEE Transactions on Haptics, Vol. 9, No. 1, 2016, pp. 111–120.

[28] Lee, S., Sukhatme, G., Jounghyun, K., and Park, C. M., "Haptic control of a mobile robot: a user study," IEEE/RSJ International Conference on Intelligent Robots and Systems, Vol. 3, IEEE, 2002, pp. 2867–2874.

[29] Darken, R. P. and Cevik, H., "Map usage in virtual environments: orientation issues," Proceedings IEEE Virtual Reality, 1999.

[30] Ferland, F., Pomerlau, F., Michaud, F., and Dinh, C. T. L., "Egocentric and Exocentric Teleoperation Interface using Real-time, 3D Video Projection," Proceedings of the 4th ACM/IEEE International Conference on Human-Robot Interaction, 2009.

[31] Chen, J. Y. C., Haas, E. C., and Barnes, M. J., "Human performance issues and user interface design for teleoperated robots," IEEE Transactions on Systems, Man and Cybernetics, Vol. 37, No. 6, Nov. 2007, pp. 1231–1245.

[32] Fod, A., Howard, A., and Mataric, M. A. J., "A laser-based people tracker," Proceedings 2002 IEEE International Conference on Robotics and Automation, 2002.

[33] Schulz, D., Burgard, W., Fox, D., and Cremers, A. B., "Tracking multiple moving targets with a mobile robot using particle filters and statistical data association," Proceedings 2001 ICRA, IEEE International Conference on Robotics and Automation, 2001.

[34] Damas, B. and Santos-Victor, J., "Avoiding moving obstacles: the forbidden velocity map," Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, 2009.

[35] Dong, H., Giakoumidis, N., Figueroa, N., and Mavridis, N., "Approaching Behaviour Monitor and Vibration Indication in Developing a General Moving Object Alarm System (GMOAS)," International Journal of Advanced Robotic Systems, Vol. 10, No. 7, 2013, p. 290.
