
MY BODY IS A CAGE: THE ROLE OF MORPHOLOGY IN GRAPH-BASED INCOMPATIBLE CONTROL

Vitaly Kurin
Department of Computer Science, University of Oxford
Oxford, United Kingdom
vitaly.kurin@cs.ox.ac.uk

Maximilian Igl
Department of Computer Science, University of Oxford
Oxford, United Kingdom
maximilian.igl@eng.ox.ac.uk

Tim Rocktäschel
Department of Computer Science, University College London
London, United Kingdom
t.rocktaschel@cs.ucl.ac.uk

Wendelin Böhmer
Department of Software Technology, Delft University of Technology
Delft, Netherlands
j.w.bohmer@tudelft.nl

Shimon Whiteson
Department of Computer Science, University of Oxford
Oxford, United Kingdom
shimon.whiteson@cs.ox.ac.uk

ABSTRACT

Multitask Reinforcement Learning is a promising way to obtain models with better performance, generalisation, data efficiency, and robustness. Most existing work is limited to compatible settings, where the state and action space dimensions are the same across tasks. Graph Neural Networks (GNNs) are one way to address incompatible environments, because they can process graphs of arbitrary size. They also allow practitioners to inject biases encoded in the structure of the input graph. Existing work in graph-based continuous control uses the physical morphology of the agent to construct the input graph, i.e., encoding limb features as node labels and using edges to connect the nodes if their corresponding limbs are physically connected. In this work, we present a series of ablations on existing methods that show that morphological information encoded in the graph does not improve their performance. Motivated by the hypothesis that any benefits GNNs extract from the graph structure are outweighed by difficulties they create for message passing, we also propose AMORPHEUS, a transformer-based approach. Further results show that, while AMORPHEUS ignores the morphological information that GNNs encode, it nonetheless substantially outperforms GNN-based methods that use the morphological information to define the message-passing scheme.

1 INTRODUCTION

Multitask Reinforcement Learning (MTRL) (Vithayathil Varghese & Mahmoud, 2020) leverages commonalities between multiple tasks to obtain policies with better returns, generalisation, data efficiency, or robustness. Most MTRL work assumes compatible state-action spaces, where the dimensionality of the states and actions is the same across tasks. However, many practically important domains, such as robotics, combinatorial optimization, and object-oriented environments, have incompatible state-action spaces and cannot be solved by common MTRL approaches.

Incompatible environments are avoided largely because they are inconvenient for function approximation: conventional architectures expect fixed-size inputs and outputs. One way to overcome this limitation is to use Graph Neural Networks (GNNs) (Gori et al., 2005; Scarselli et al., 2005; Battaglia et al., 2018). A key feature of GNNs is that they can process graphs of arbitrary size and thus, in principle, allow MTRL in incompatible environments. However, GNNs also have a second key feature: they allow models to condition on structural information about how state features are related, e.g., how a robot's limbs are connected. In effect, this enables practitioners to incorporate additional domain knowledge where states are described as labelled graphs. Here, a graph is a collection of labelled nodes, indicating the features of corresponding objects, and edges, indicating the relations between them. In many cases, e.g., with the robot mentioned above, such domain knowledge is readily available. This results in a structural inductive bias that restricts the model's computation graph, determining how errors backpropagate through the network.

GNNs have been applied to MTRL in continuous control environments, a staple benchmark of modern Reinforcement Learning (RL), by leveraging both of the key features mentioned above (Wang et al., 2018; Huang et al., 2020). In these two works, the labelled graphs are based on the agent's physical morphology, with nodes labelled with the observable features of their corresponding limbs, e.g., coordinates, angular velocities, and limb type. If two limbs are physically connected, there is an edge between their corresponding nodes. However, the assumption that it is beneficial to restrict the model's computation graph in this way has, to our knowledge, not been validated.

To investigate this issue, we conduct a series of ablations on existing GNN-based continuous control methods. The results show that removing morphological information does not harm the performance of these models. In addition, we propose AMORPHEUS, a new continuous control MTRL method based on transformers (Vaswani et al., 2017) instead of GNNs that use morphological information to define the message-passing scheme. AMORPHEUS is motivated by the hypothesis that any benefit GNNs can extract from the morphological domain knowledge encoded in the graph is outweighed by the difficulty that the graph creates for message passing. In a sparsely connected graph, crucial state information must be communicated across multiple hops, which we hypothesise is difficult to learn in practice. AMORPHEUS uses transformers instead, which can be thought of as fully connected GNNs with attentional aggregation (Battaglia et al., 2018). Hence, AMORPHEUS ignores the morphological domain knowledge but in exchange obviates the need to learn multi-hop communication. Similarly, in Natural Language Processing, transformers were shown to perform better without an explicit structural bias and even to learn such structures from data (Vig & Belinkov, 2019; Goldberg, 2019; Tenney et al., 2019; Peters et al., 2018).

Our results on incompatible MTRL continuous control benchmarks (Huang et al., 2020; Wang et al., 2018) strongly support our hypothesis: AMORPHEUS substantially outperforms GNN-based alternatives with fixed message-passing schemes in terms of sample efficiency and final performance. In addition, AMORPHEUS exhibits nontrivial behaviour such as cyclic attention patterns coordinated with gaits.

2 BACKGROUND

We now describe the necessary background for the rest of the paper.

2.1 REINFORCEMENT LEARNING

A Markov Decision Process (MDP) is a tuple $\langle S, A, R, T, \rho_0 \rangle$. The first two elements define the set of states $S$ and the set of actions $A$. The next element defines the reward function $R(s, a, s')$ with $s, s' \in S$ and $a \in A$. $T(s'|s, a)$ is the probability distribution function over states $s' \in S$ after taking action $a$ in state $s$. The last element of the tuple, $\rho_0$, is the distribution over initial states. Task and environment are synonyms for MDPs in this work.

A policy $\pi(a|s)$ is a mapping from states to distributions over actions. The goal of an RL agent is to find a policy that maximises the expected discounted cumulative return $J = \mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^t r_t\right]$, where $\gamma \in [0, 1)$ is a discount factor, $t$ is the discrete environment step and $r_t$ is the reward at step $t$. In the MTRL setting, the agent aims to maximise the average performance across $N$ tasks: $\frac{1}{N}\sum_{i=1}^{N} J_i$. We use MTRL return to denote the average performance across the tasks.
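As a concrete illustration (ours, not from the paper), a minimal sketch of how the discounted return of one episode and the MTRL return across tasks could be computed; the toy reward lists are hypothetical:

```python
import numpy as np

def discounted_return(rewards, gamma=0.99):
    """Compute J = sum_t gamma^t * r_t for one episode given a list of rewards."""
    return sum(gamma ** t * r for t, r in enumerate(rewards))

def mtrl_return(per_task_returns):
    """MTRL return: the average of the per-task returns J_i over the N tasks."""
    return float(np.mean(per_task_returns))

# Toy usage: three tasks with different episode lengths; incompatible tasks
# can still be averaged at the level of returns.
episodes = [[1.0, 1.0, 0.5], [0.2] * 10, [2.0]]
print(mtrl_return([discounted_return(ep) for ep in episodes]))
```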

In this paper, we assume that states and actions are multivariate, but dimensionality remains constant within one MDP: $s \in \mathbb{R}^k, \forall s \in S$, and $a \in \mathbb{R}^{k'}, \forall a \in A$. We use $\dim(S) = k$ and $\dim(A) = k'$ to denote this dimensionality, which can differ among MDPs. We consider two tasks MDP$_1$ and MDP$_2$ incompatible if the dimensionality of their state or action spaces disagrees, i.e., $\dim(S_1) \neq \dim(S_2)$ or $\dim(A_1) \neq \dim(A_2)$, with the subscript denoting the task index. In this case, MTRL policies or value functions cannot be represented by a Multi-Layer Perceptron (MLP), which requires fixed input dimensions. We make no additional assumptions about the semantics of the state and action set elements and focus only on the dimension mismatch.

Our approach, as well as the baselines in this work (Wang et al., 2018; Huang et al., 2020), uses Policy Gradient (PG) methods (Peters & Schaal, 2006). PG methods optimise a policy using gradient ascent on the objective: $\theta_{t+1} = \theta_t + \alpha \nabla_\theta J|_{\theta=\theta_t}$, where $\theta$ parameterises the policy. Often, to reduce variance in the gradient estimates, one learns a critic so that the policy gradient becomes $\nabla_\theta J(\theta) = \mathbb{E}\left[\sum_t A^\pi_t \nabla_\theta \log \pi_\theta(a_t|s_t)\right]$, where $A^\pi_t$ is an estimate of the advantage function (e.g., the TD residual $r_t + \gamma V^\pi(s_{t+1}) - V^\pi(s_t)$). The state-value function $V^\pi(s)$ is the expected discounted return a policy $\pi$ receives starting at state $s$. Wang et al. (2018) use PPO (Schulman et al., 2017), which restricts each policy update to avoid instabilities from drastic changes in the policy's behaviour. Huang et al. (2020) use TD3 (Fujimoto et al., 2018), a PG method based on DDPG (Lillicrap et al., 2016).
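As a rough sketch (ours, not the authors' code), the TD-residual advantage and the resulting policy-gradient loss could be computed as follows; the tensor inputs are hypothetical placeholders for quantities gathered over a trajectory:

```python
import torch

def pg_loss(rewards, values, next_values, log_probs, gamma=0.99):
    """Policy-gradient loss with a TD-residual advantage estimate.

    rewards, values, next_values, log_probs: 1-D tensors over a trajectory,
    where values = V(s_t) and next_values = V(s_{t+1}) come from a learned
    critic, and log_probs = log pi_theta(a_t | s_t).
    """
    # A_t ~ r_t + gamma * V(s_{t+1}) - V(s_t); detach so the critic is not
    # updated through the actor loss.
    advantage = (rewards + gamma * next_values - values).detach()
    # Minimising the negative objective is gradient ascent on J(theta).
    return -(advantage * log_probs).mean()
```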

2.2 GRAPH NEURAL NETWORKS FOR INCOMPATIBLE MULTITASK RL

GNNs can address incompatible environments because they can process graphs of arbitrary sizes and topologies. A GNN is a function that takes a labelled graph as input and outputs a graph $G'$ with different labels but the same topology. Here, a labelled graph $G := \langle \mathcal{V}, \mathcal{E} \rangle$ consists of a set of vertices $v_i \in \mathcal{V}$, labelled with vectors $\mathbf{v}_i \in \mathbb{R}^{m_v}$, and a set of directed edges $e_{ij} \in \mathcal{E}$ from vertex $v_i$ to $v_j$, labelled with vectors $\mathbf{e}_{ij} \in \mathbb{R}^{m_e}$. The output graph $G'$ has the same topology, but the labels can be of different dimensionality than the input, that is, $\mathbf{v}'_i \in \mathbb{R}^{m'_v}$ and $\mathbf{e}'_{ij} \in \mathbb{R}^{m'_e}$. By graph topology we mean the connectivity of the graph, which can be represented by an adjacency matrix, a binary matrix whose elements $a_{ij}$ equal one iff there is an edge $e_{ij} \in \mathcal{E}$ connecting vertices $v_i, v_j \in \mathcal{V}$.

A GNN computes the output labels for entities of type $k$ with parameterised update functions $\phi^k_\psi$, represented by neural networks that can be learnt end-to-end via backpropagation. These updates can depend on a varying number of edges or vertices, which have to be summarised first using aggregation functions, which we denote $\rho$. Apart from their ability to operate on sets of elements, aggregation functions should be permutation invariant. Examples of such aggregation functions include summation, averaging, and max or min operations.
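The following is an illustrative sketch (ours, not any particular published model) of a single message-passing step with a permutation-invariant sum aggregation; the linear layers stand in for the learned update functions $\phi^e$ and $\phi^v$:

```python
import torch
import torch.nn as nn

class MessagePassingStep(nn.Module):
    """One generic GNN step: update edge labels, aggregate them per receiver,
    then update node labels. Sum aggregation is permutation invariant."""

    def __init__(self, node_dim, edge_out_dim):
        super().__init__()
        self.phi_e = nn.Linear(node_dim, edge_out_dim)              # edge update
        self.phi_v = nn.Linear(node_dim + edge_out_dim, node_dim)   # node update

    def forward(self, v, edges):
        # v: (num_nodes, node_dim); edges: list of (sender, receiver) index pairs.
        num_nodes = v.shape[0]
        aggregated = []
        for j in range(num_nodes):
            # Sum-aggregate messages over incoming edges; an empty sum stays zero.
            incoming = [self.phi_e(v[i]) for i, k in edges if k == j]
            aggregated.append(sum(incoming) if incoming
                              else torch.zeros(self.phi_e.out_features))
        rho = torch.stack(aggregated)                    # (num_nodes, edge_out_dim)
        return self.phi_v(torch.cat([v, rho], dim=-1))   # updated node labels
```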

Incompatible MTRL for continuous control implies learning a common policy for a set of agents with different numbers of limbs and different connectivity of those limbs, i.e., morphology. More precisely, a set of incompatible continuous control environments is a set of MDPs as described in Section 2.1. When a state is represented as a graph, each node label contains the features of its corresponding limb, e.g., limb type, coordinates, and angular velocity. Similarly, each component of an action corresponds to a node, and the node's output label is interpreted as the torque to apply at the corresponding joint. The typical reward function of a MuJoCo (Todorov et al., 2012) environment includes a reward for staying alive, the distance covered, and a penalty for action magnitudes.

We now describe two existing approaches to incompatible control: NERVENET (Wang et al., 2018) and Shared Modular Policies (SMP) (Huang et al., 2020).

2.2.1 NERVENET

In NERVENET, the input observations are first encoded via an MLP that processes each node label as a batch element: $\mathbf{v}_i \leftarrow \phi_\chi(\mathbf{v}_i), \forall v_i \in \mathcal{V}$. After that, the message-passing block of the model performs the following computations (in order):

$$\mathbf{e}'_{ij} \leftarrow \phi^e_\psi(\mathbf{v}_i), \ \forall e_{ij} \in \mathcal{E}, \qquad \mathbf{v}_i \leftarrow \phi^v_\xi\big(\mathbf{v}_i, \rho(\{\mathbf{e}'_{ki} \mid e_{ki} \in \mathcal{E}\})\big), \ \forall v_i \in \mathcal{V}.$$

The edge updater $\phi^e_\psi$ in NERVENET is an MLP which does not take the receiver's state into account. Using only one message pass restricts the learned function to local computations on the graph. The node updater $\phi^v_\xi$ is a Gated Recurrent Unit (GRU) (Cho et al., 2014), which maintains an internal state when doing multiple message-passing iterations and takes the aggregated outputs of the edge updater for all incoming edges as input. After the message-passing stage, the MLP decoder takes the states of the nodes and, like the encoder, processes them independently, emitting scalars used as the means of the normal distribution from which actions are sampled: $\mathbf{v}^{dec}_i \leftarrow \phi_\eta(\mathbf{v}_i), \forall v_i \in \mathcal{V}$. The standard deviation of this distribution is a separate state-independent vector with one scalar per action.
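Below is a minimal sketch (ours, a simplification of the description above, not the NerveNet code) of one encode, message-pass, decode cycle; all module names and sizes are placeholders:

```python
import torch
import torch.nn as nn

class NerveNetLikePolicy(nn.Module):
    """Shared encoder -> edge MLP + GRU node update -> per-node decoder (action means)."""

    def __init__(self, obs_dim, hidden=64):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, hidden)    # phi_chi, shared over nodes
        self.edge_mlp = nn.Linear(hidden, hidden)    # phi_e, sender features only
        self.node_gru = nn.GRUCell(hidden, hidden)   # phi_v, keeps per-node state
        self.decoder = nn.Linear(hidden, 1)          # phi_eta, one action mean per node

    def forward(self, obs, edges, passes=1):
        # obs: (num_nodes, obs_dim); edges: list of (sender, receiver) index pairs.
        h = self.encoder(obs)
        for _ in range(passes):
            new_h = []
            for j in range(h.shape[0]):
                # Messages depend only on the sender, then are mean-aggregated.
                incoming = [self.edge_mlp(h[i]) for i, k in edges if k == j]
                agg = (torch.stack(incoming).mean(0) if incoming
                       else torch.zeros_like(h[j]))
                new_h.append(self.node_gru(agg.unsqueeze(0),
                                           h[j].unsqueeze(0)).squeeze(0))
            h = torch.stack(new_h)
        return self.decoder(h).squeeze(-1)           # means of the action distribution
```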

2.2.2 SHARED MODULAR POLICIES

SMP is a variant of a GNN that operates only on trees. Computation is performed in two stages: bottom-up and top-down. In the first stage, information propagates level by level from the leaves to the root, with parents aggregating information from their children. In the second stage, information propagates from parents to the leaves, with each parent emitting multiple messages, one per child. The policy emits actions in the second stage of the computation, together with the downstream messages. Instead of a permutation-invariant aggregation, the messages are concatenated. This, as well as the separate messages for the children, also injects structural bias into the model, e.g., separating the messages for the left and right parts of robots with bilateral symmetry. In addition, its message-passing scheme depends on the morphology and the choice of the root node. In fact, Huang et al. (2020) show that the root node choice can affect performance by 15%.

SMP trains separate models for the actor and the critic. The actor outputs one action per non-root node. The critic also outputs a scalar per node. When updating the critic, a value loss is computed independently for each node, with each node's target using the same scalar reward from the environment.
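For intuition, here is a heavily simplified sketch (ours, not the SMP implementation) of the two-stage tree message passing with concatenation instead of a permutation-invariant aggregation; the module shapes, the `max_children` padding, and the topological node ordering are assumptions:

```python
import torch
import torch.nn as nn

class TinySMP(nn.Module):
    """Bottom-up then top-down message passing on a tree; one action per node."""

    def __init__(self, obs_dim, msg_dim=16, max_children=2):
        super().__init__()
        self.max_children, self.msg_dim = max_children, msg_dim
        # Bottom-up: node obs + concatenated (ordered) child messages -> upward message.
        self.up = nn.Linear(obs_dim + max_children * msg_dim, msg_dim)
        # Top-down: node obs + upward msg + parent msg -> action and per-child messages.
        self.down = nn.Linear(obs_dim + 2 * msg_dim, 1 + max_children * msg_dim)

    def forward(self, obs, children):
        # obs: (num_nodes, obs_dim); children[i]: ordered child indices of node i;
        # node 0 is assumed to be the root and parents precede children in the order.
        n = obs.shape[0]
        up_msg = [None] * n
        for i in reversed(range(n)):                 # leaves first
            kids = [up_msg[c] for c in children[i]]
            kids += [torch.zeros(self.msg_dim)] * (self.max_children - len(kids))
            up_msg[i] = self.up(torch.cat([obs[i], *kids]))
        actions = [None] * n
        down_msg = [torch.zeros(self.msg_dim)] * n
        for i in range(n):                           # root first
            out = self.down(torch.cat([obs[i], up_msg[i], down_msg[i]]))
            actions[i] = out[0]
            for slot, c in enumerate(children[i]):   # a separate message per child
                down_msg[c] = out[1 + slot * self.msg_dim: 1 + (slot + 1) * self.msg_dim]
        return torch.stack(actions[1:])              # actions for the non-root nodes
```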

2.3 TRANSFORMERS

Transformers can be seen as GNNs applied to fully connected graphs with attention as the edge-to-vertex aggregation operation (Battaglia et al., 2018). The self-attention used in transformers is an associative memory-like mechanism that first projects the feature vector of each node $\mathbf{v}_i \in \mathbb{R}^{m_v}$ into three vectors: query $\mathbf{q}_i := \Theta \mathbf{v}_i \in \mathbb{R}^d$, key $\mathbf{k}_i := \bar{\Theta} \mathbf{v}_i \in \mathbb{R}^d$, and value $\hat{\mathbf{v}}_i := \hat{\Theta} \mathbf{v}_i \in \mathbb{R}^{m_v}$. The parameter matrices $\Theta$, $\bar{\Theta}$, and $\hat{\Theta}$ are learnt. The query of the receiver $v_i$ is compared to the keys of the senders using a dot product. The resulting values $\mathbf{w}_i$ are used as weights in a weighted sum of all the value vectors in the graph. The computation proceeds as follows:

$$\mathbf{w}_i := \mathrm{softmax}\!\left(\frac{[\mathbf{k}_1, \ldots, \mathbf{k}_n]^\top \mathbf{q}_i}{\sqrt{d}}\right), \qquad \mathbf{v}'_i := [\hat{\mathbf{v}}_1, \ldots, \hat{\mathbf{v}}_n]\,\mathbf{w}_i, \quad \forall v_i \in \mathcal{V}, \tag{1}$$

with $[\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n]$ being the $\mathbb{R}^{k \times n}$ matrix of concatenated vectors $\mathbf{x}_i \in \mathbb{R}^k$. Often, multiple attention heads, i.e., multiple $\Theta$, $\bar{\Theta}$, and $\hat{\Theta}$ matrices, are used to learn different interactions between the nodes and mitigate the consequences of unlucky initialisation. The outputs of the heads are concatenated and then projected back to the required dimensions.
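A direct sketch (ours) of the computation in Equation 1 over a matrix of node features; the projection matrices are randomly initialised here only to make the example self-contained:

```python
import torch

def self_attention(V, Theta_q, Theta_k, Theta_v):
    """Single-head self-attention over node features V of shape (n, m_v).

    Theta_q, Theta_k: (d, m_v) projection matrices; Theta_v: (m_v, m_v).
    Returns the updated node features of shape (n, m_v).
    """
    Q = V @ Theta_q.T                    # queries  (n, d)
    K = V @ Theta_k.T                    # keys     (n, d)
    Vhat = V @ Theta_v.T                 # values   (n, m_v)
    d = Q.shape[-1]
    # Row i of W holds softmax([k_1,...,k_n]^T q_i / sqrt(d)), the weights for receiver i.
    W = torch.softmax(Q @ K.T / d ** 0.5, dim=-1)   # (n, n)
    return W @ Vhat                      # weighted sum of all value vectors

# Toy usage: a graph of 4 nodes with 8-dimensional labels.
n, m_v, d = 4, 8, 16
V = torch.randn(n, m_v)
out = self_attention(V, torch.randn(d, m_v), torch.randn(d, m_v), torch.randn(m_v, m_v))
```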

A transformer block is a combination of an attention block and a feedforward layer, with possible normalisation between them. In addition, there are residual connections from the input to the attention output and from the output of the attention to the feedforward layer output. Transformer blocks can be stacked to take higher-order dependencies into account, i.e., reacting not only to the features of the nodes, but also to how the features of the nodes change after applying a transformer block.

3 THE ROLE OF MORPHOLOGY IN EXISTING WORK

In this section, we provide evidence against the assumption that GNNs improve performance by exploiting information about physical morphology (Huang et al., 2020; Wang et al., 2018). Here and in all of the following sections, we run experiments with three random seeds and report the average undiscounted MTRL return and the standard error across the seeds.

To determine whether information about the agent's morphology encoded in the relational graph structure is essential to the success of SMP, we compare its performance given full information about the structure (morphology), given no information about the structure (star), and given a structural bias unrelated to the agent's morphology (line). Ideally, we would also test a fully connected architecture, but SMP only works with trees. Figure 9 in Appendix B illustrates the tested topologies.

(Figure 1 plots the average return across environments against total environment steps for SMP with morphology, star, and line input graphs on (a) Walker++ and (b) Humanoid++, and the episode return for NERVENET with morphology, line, star, and fully connected input graphs on (c) Walkers.)

Figure 1: Neither SMP nor NERVENET leverages the agent's morphological information, or the positive effects are outweighed by their negative effect on message passing.

The results in Figures 1a and 1b demonstrate that, surprisingly, performance is not contingent on having information about the physical morphology. A star agent performs on par with the morphology agent, thus refuting the assumption that the method learns because it exploits information about the agent's physical morphology. The line agent performs worse, perhaps because the network must propagate messages even further, and information is lost with each hop due to the finite size of the MLPs causing information bottlenecks (Alon & Yahav, 2020).
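To make the ablated topologies concrete, here is a small sketch (ours) of how the three input graphs could be built from a parent list; the example parent list for a 7-limb walker is hypothetical:

```python
import numpy as np

def adjacency(num_nodes, edges):
    """Symmetric binary adjacency matrix from a list of undirected edges."""
    a = np.zeros((num_nodes, num_nodes), dtype=int)
    for i, j in edges:
        a[i, j] = a[j, i] = 1
    return a

def morphology_edges(parents):
    """Edges follow the physical connectivity: each limb is linked to its parent."""
    return [(i, p) for i, p in enumerate(parents) if p >= 0]

def star_edges(num_nodes, centre=0):
    """No morphological information: every limb is linked to a single hub node."""
    return [(centre, i) for i in range(num_nodes) if i != centre]

def line_edges(num_nodes):
    """A structural bias unrelated to morphology: limbs form a chain."""
    return [(i, i + 1) for i in range(num_nodes - 1)]

# Hypothetical 7-limb agent: node 0 is the torso, -1 marks the root.
parents = [-1, 0, 1, 2, 0, 4, 5]
print(adjacency(7, morphology_edges(parents)))
print(adjacency(7, star_edges(7)))
print(adjacency(7, line_edges(7)))
```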

We also present similar results for NERVENET. Figure 1c shows that all of the variants we tried perform similarly well on Walkers from Wang et al. (2018), with star being marginally better. Since NERVENET can process non-tree graphs, we also tested a fully connected variant. This version learns more slowly at the beginning, probably because of difficulties with differentiating nodes at the aggregation step. Interestingly, in contrast to SMP, in NERVENET line performs on par with morphology. This might be symptomatic of problems with the message-passing mechanism of SMP, e.g., bottlenecks leading to information loss.

4 AMORPHEUS

Inspired by the results above, we propose AMORPHEUS, a transformer-based method for incompatible MTRL in continuous control. AMORPHEUS is motivated by the hypothesis that any benefit GNNs can extract from the morphological domain knowledge encoded in the graph is outweighed by the difficulty that the graph creates for message passing. In a sparse graph, crucial state information must be communicated across multiple hops, which we hypothesise is difficult to learn in practice.

AMORPHEUS belongs to the encode-process-decode family of architectures (Battaglia et al., 2018) with a transformer at its core. Since transformers can be seen as GNNs operating on fully connected graphs, this approach allows us to learn a message-passing schema for each state and each pass separately, and limits the number of message passes needed to propagate sufficient information through the graph. Multi-hop message propagation in the presence of aggregation, which could cause problems with gradient propagation and information loss, is no longer required. We implement both actor and critic in the SMP codebase (Huang et al., 2020) and make our implementation available online at https://github.com/yobibyte/amorpheus.

(Figure 2 depicts per-limb observations passing through a shared encoder, a transformer, and a shared decoder.)

Figure 2: AMORPHEUS architecture. Lines with squares at the end denote concatenation. Arrows going separately through the encoder and decoder denote that rows of the input matrix are processed independently as batch elements. Dashed arrows denote message passing in a transformer block. The diagram depicts the policy network; the critic has an identical architecture, with the decoder outputs interpreted as value function values.

Like in SMP, there is no weight sharing between the actor and the critic. Both consist of three parts: a linear encoder, a transformer in the middle, and an output decoder MLP.

Figure 2 illustrates the AMORPHEUS architecture (policy). The encoder and decoder process each node independently, as if they were different elements of a mini-batch. Like SMP, the policy network has one output per graph node. The critic has the same architecture as in Figure 2, and, as in Huang et al. (2020), each critic node outputs a scalar, with the value loss independently computed per node. Similarly to NERVENET and SMP, AMORPHEUS is modular and can be used in incompatible environments, including those not seen during training. In contrast to SMP, which is constrained by the maximum number of children per node seen at model initialisation, AMORPHEUS can be applied to any other morphology with no constraints on the physical connectivity.

Instead of the one-hot encoding used in natural language processing, we apply a linear layer to the node observations. Each node observation uses the same state representation as SMP and includes the limb type (e.g., hip or shoulder), position including the relative x coordinate of the limb with respect to the torso, positional and rotational velocities, rotations, and the joint angle and the possible range of the angle normalised to [0, 1]. We add residual connections from the input features to the decoder output to avoid the nodes forgetting their own features by the time the decoder independently computes the actions. Both actor and critic use two attention heads for each of the three transformer layers. Layer Normalisation (Ba et al., 2016) is a crucial component of transformers which we also use in AMORPHEUS. See Appendix A for more details on the implementation.
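The following is a condensed sketch (ours, based on the description above and the hyperparameters in Appendix A, not the released code) of the AMORPHEUS policy network; the exact placement of the residual connection is an assumption:

```python
import torch
import torch.nn as nn

class AmorpheusPolicy(nn.Module):
    """Linear encoder -> TransformerEncoder (fully connected attention) -> per-node decoder."""

    def __init__(self, obs_dim, d_model=128, heads=2, layers=3, ff=256):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, d_model)            # shared over limbs
        layer = nn.TransformerEncoderLayer(d_model, heads, dim_feedforward=ff)
        self.transformer = nn.TransformerEncoder(layer, layers,
                                                 norm=nn.LayerNorm(d_model))
        # Residual: raw node features are fed to the decoder so nodes do not
        # forget their own observations after message passing.
        self.decoder = nn.Sequential(nn.Linear(d_model + obs_dim, ff),
                                     nn.ReLU(), nn.Linear(ff, 1))

    def forward(self, obs):
        # obs: (num_limbs, obs_dim); no positional encoding, no attention mask.
        h = self.encoder(obs).unsqueeze(1)     # (num_limbs, batch=1, d_model)
        h = self.transformer(h).squeeze(1)     # attention over all limb pairs
        return self.decoder(torch.cat([h, obs], dim=-1)).squeeze(-1)  # one action per limb
```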

4.1 EXPERIMENTAL RESULTS

We first test AMORPHEUS on the set of MTRL environments proposed by Huang et al. (2020). For Walker++, we omit flipped environments, since Huang et al. (2020) implement flipping at the model level; for AMORPHEUS, the flipped environments look identical to the original ones. Our experiments in this section are built on top of the TD3 implementation used in Huang et al. (2020). Figure 3 supports our hypothesis that explicit morphological information encoded in graph topology is not needed to yield a single policy achieving high average returns across a set of incompatible continuous control environments. Free from the need to learn multi-hop communication and equipped with the attention mechanism, AMORPHEUS clearly outperforms SMP, the state-of-the-art algorithm for incompatible continuous control. Huang et al. (2020) report that training SMP on Cheetah++ together with other environments makes SMP unstable. By contrast, AMORPHEUS has no trouble learning in this regime (Figures 3g and 3h).

Our experiments demonstrate that the node features carry enough information for AMORPHEUS to perform the task and limb discrimination needed for successful MTRL continuous control policies. For example, a model can distinguish left from right, not from structural biases as in SMP, but from the relative position of the limb w.r.t. the root node provided in the node features.

While the total number of tasks in the SMP benchmarks is high, they all share one key characteristic: all tasks in a benchmark are built using subsets of the limbs of an archetype (e.g., Walker++ or Cheetah++). To verify that our results hold more broadly, we adapted the Walkers benchmark (Wang et al., 2018) and compared AMORPHEUS with SMP and NERVENET on it. This benchmark includes five agents with different morphologies: a Hopper, a HalfCheetah, a FullCheetah, a Walker, and an Ostrich. The results in Figure 4 are consistent¹ with our previous experiments, demonstrating the benefits of AMORPHEUS' fully connected graph with attentional aggregation.

¹Note that the performance of NERVENET is not directly comparable, as the observational features and the learning algorithm differ from AMORPHEUS and SMP. We do not test NERVENET on SMP benchmarks because the codebases are not compatible and comparing NERVENET and SMP is not the focus of the paper. Even if we implemented NERVENET in the SMP training loop, it is unclear how the critic of NERVENET would perform in a new setting. The original paper considers two options for the critic: one GNN-based and one MLP-based. We use the latter in Figure 4, as the former takes only the root node output labels as input and is thus most likely to face difficulty in learning multi-hop message passing. The MLP critic should perform better because training an MLP is easier, though it might be sample-inefficient when the number of tasks is large. For example, in Cheetah++ an agent would need to learn 12 different critics. Finally, NERVENET learns a separate MLP encoder per task, partially defeating the purpose of using a GNN for incompatible environments.

(Figure 3 plots the average return across environments against total environment steps for AMORPHEUS and SMP on (a) Walker++, (b) Cheetah++, (c) Humanoid++, (d) Hopper++, (e) Walker-Humanoid++, (f) Walker-Humanoid-Hopper++, (g) Cheetah-Walker-Humanoid++, and (h) Cheetah-Walker-Humanoid-Hopper++.)

Figure 3: AMORPHEUS consistently outperforms SMP on MTRL benchmarks from Huang et al. (2020), supporting our hypothesis that no explicit structural information is needed to learn a successful MTRL policy and that a facilitated message-passing procedure results in faster learning.

While we focused on MTRL in this work, we also evaluated AMORPHEUS in a zero-shot generalisation setting. Table 3 in Appendix D provides initial results demonstrating AMORPHEUS's potential.

4.2 ATTENTION MASK ANALYSIS

GNN-based policies, especially those that use attention, are more interpretable than monolithic MLP policies. We now analyse the attention masks that AMORPHEUS learns. Having an implicit structure that is state-dependent is one of the benefits of AMORPHEUS: every node has access to the other nodes' annotations, and the aggregation weights depend on the input as shown in Equation 1. By contrast, NERVENET and SMP have a rigid message-passing structure that does not change throughout training or throughout a rollout. Indeed, Figure 5 shows the variety of masks a Walker++ model exhibits within a Walker-7 rollout, confirming that AMORPHEUS attends to different parts of the state space based on the input.

Both Wang et al. (2018) and Huang et al. (2020) observe periodic patterns arising in their models. Similarly, AMORPHEUS demonstrates cycles in its attention masks, usually arising in the first layer of the transformer. Figure 6 shows the column-wise sum of the attention masks coordinated with an upper-leg limb of a Walker-7 agent. Intuitively, the column-wise sum shows how much the other nodes are interested in the node corresponding to that column.
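The column-wise statistic in Figure 6 could be computed roughly as follows (our sketch; the `masks` array of per-step attention matrices is an assumed input, not an artefact shipped with the paper):

```python
import numpy as np

def column_wise_sums(masks, node_idx):
    """masks: array of shape (timesteps, n_nodes, n_nodes), where masks[t, i, j]
    is how much node i attends to node j at step t (each row sums to one).
    Returns, per step, the total attention that all nodes pay to node_idx."""
    return masks[:, :, node_idx].sum(axis=1)

# Toy usage: uniform attention over 7 limbs gives a column sum of 1 at every step.
masks = np.full((5, 7, 7), 1 / 7)
print(column_wise_sums(masks, node_idx=2))   # [1. 1. 1. 1. 1.]
```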

Interestingly, attention masks in earlier layers change more slowly within a rollout than those of the downstream layers. Figure 13 in Appendix C.2 demonstrates this phenomenon for three different Walker++ models tested on Walker-7.

(Figure 4 plots the average return across environments against total environment steps for NERVENET, AMORPHEUS, and SMP.)

Figure 4: MTRL performance on Walkers (Wang et al., 2018).

(Figure 5 shows several attention matrices over the nodes torso, left1-3, and right1-3.)

Figure 5: State-dependent masks of AMORPHEUS (3rd attention layer) within a Walker-7 rollout.

(Figure 6 plots the column-wise mask sum and the normalised angle of the limb over 1000 environment steps.)

Figure 6: In the first attention layer of a Walker-7 rollout, nodes attend to an upper leg (column-wise mask sum ∼ 3) when the leg is closer to the ground (normalised angle ∼ 0).

This shows that AMORPHEUS might, in principle, learn a rigid structure (as in GNNs) if needed.

Finally, we investigate how attention masks evolve over time. Early in training, the masks are spread across the whole graph. Later in training, the distributions of mask weights become less uniform. Figures 10, 11, and 12 in Appendix C.1 demonstrate this phenomenon on Walker-7.

5 RELATED WORK

Most MTRL research considers the compatible case (Rusu et al., 2016; Parisotto et al., 2016; Teh et al., 2017; Vithayathil Varghese & Mahmoud, 2020). MTRL for continuous control is often done from pixels, with CNNs solving part of the compatibility issue. DMLab (Beattie et al., 2016) is a popular choice when learning from pixels with a compatible action space shared across the environments (Hessel et al., 2019; Song et al., 2020).

GNNs have started to stretch the possibilities of RL, allowing MTRL in incompatible environments. Khalil et al. (2017) learn combinatorial optimisation algorithms over graphs. Kurin et al. (2020) learn a branching heuristic of a SAT solver. Applying the approximation schemes typically used in RL to these settings is impossible, because they expect the input and output to be of fixed size. Another form of (potentially incompatible) RL using message passing are coordination graphs (e.g., DCG, Boehmer et al., 2020), which use the max-plus algorithm (Pearl, 1989) to coordinate action selection between multiple agents. One can apply DCG in single-agent RL using the ideas of Tavakoli et al. (2021). Several methods for incompatible continuous control have also been proposed. Chen et al. (2018) pad the state vector with zeros to obtain the same dimensionality for robots with different numbers of joints, and condition the policy on the hardware information of the agent. D'Eramo et al. (2020) demonstrate a positive effect of learning a common network for multiple tasks, learning a task-specific encoder and decoder per task. We expect this method to suffer from sample inefficiency because it has to learn separate input and output heads for each task. Moreover, Wang et al. (2018) have a similar implementation of their MTRL baseline, showing that GNNs have benefits over MLPs for incompatible control. Huang et al. (2020), whose work is the main baseline in this paper, apply a GNN-like approach and study its MTRL and generalisation properties. The method can be used only with trees, its aggregation function is not permutation invariant, and the message-passing schema stays fixed throughout the training procedure. Wang et al. (2018) and Huang et al. (2020) attribute the effectiveness of their methods to the ability of the GNNs to exploit information about agent morphology. In this work, we present evidence against this hypothesis, showing that existing approaches do not exploit morphological information as was previously believed.


Attention mechanisms have also been used in the RL setting. Zambaldi et al. (2018) consider self-attention to deal with an object-oriented state space. They further generalize this to variable action spaces and test generalisation on Starcraft-II mini-games that have a varying number of units and other environmental entities. Duan et al. (2017) apply attention both for temporal dependency and for a factorised state space (different objects in the scene), keeping the action space compatible. Parisotto et al. (2020) use transformers as a replacement for a recurrent policy. Loynd et al. (2020) use transformers to add history dependence in a POMDP as well as for factored observations, having a node per game object. The authors do not consider a factored action space, with the policy receiving the aggregated information of the graph after the message passing ends. Baker et al. (2020) use self-attention to account for a factored state space, attending over objects or other agents in the scene. AMORPHEUS does not use a transformer for recurrency but for the factored state and action spaces, with each non-torso node having an action output. Iqbal & Sha (2019) apply attention to generalise MTRL multi-agent policies over varying environmental objects, and Iqbal et al. (2020) extend this to a factored action space by summarising the values of all agents with a mixing network (Rashid et al., 2020). Li et al. (2020) learn embeddings for a multi-agent actor-critic architecture by generating the weights of a graph convolutional network (GCN, Kipf & Welling, 2017) with attention. This allows a different topology in every state, similar to AMORPHEUS, which goes one step further and allows the topology to change in every round of message passing.

Another line of work aims to infer graph topology instead of hardcoding one. The Differentiable Graph Module (Kazi et al., 2020) predicts edge probabilities using a continuous relaxation of k-nearest neighbours to differentiate the output with respect to the edges in the graph. Johnson et al. (2020) learn to augment a given graph with additional edges to improve the performance of a downstream task. Kipf et al. (2018) use variational autoencoders (Kingma & Welling, 2014) with a GNN for reconstruction. Notably, the authors observe that message passing on a fully connected graph can work better than message passing restricted by the skeleton when evaluated on human motion capture data.

6 CONCLUSIONS AND FUTURE WORK

In this paper, we investigated the role of explicit morphological information in graph-based continuous control. We ablated the existing methods SMP and NERVENET, providing evidence against the belief that these methods improve performance by exploiting explicit morphological structure encoded in graph edges. Motivated by our findings, we presented AMORPHEUS, a transformer-based method for MTRL in incompatible environments. AMORPHEUS obviates the need to propagate messages far away in the graph and can attend to different regions of the observations depending on the input and the particular point in training. As a result, AMORPHEUS clearly outperforms existing work in incompatible continuous control. In addition, AMORPHEUS exhibits non-trivial behaviour such as periodic cycles of attention masks coordinated with the gait. The results show that information in the node features alone is enough to learn a successful MTRL policy. We believe our results further push the boundaries of incompatible MTRL and provide valuable insights for further progress. One possible drawback of AMORPHEUS is its computational complexity. Transformers suffer from quadratic complexity in the number of nodes, with a growing body of work addressing this issue (Tay et al., 2020). However, the number of nodes in continuous control problems is relatively low compared to the much longer sequences used in NLP (Devlin et al., 2019). Moreover, transformers are highly parallelisable, compared to SMP with its data dependency across tree levels (the tree is processed level by level, with each level taking the output of the previous level as input).

We focused on investigating the effect of injecting explicit morphological information into the model. However, there are also opportunities to improve the learning algorithm itself. Potential directions for improvement include averaging gradients instead of performing sequential task updates, or balancing task updates with multi-armed bandits or PopArt (Hessel et al., 2019).

ACKNOWLEDGMENTS

VK is a doctoral student at the University of Oxford funded by Samsung R&D Institute UK through the AIMS program. SW has received funding from the European Research Council under the European Union's Horizon 2020 research and innovation programme (grant agreement number 637713). The experiments were made possible by a generous equipment grant from NVIDIA. The authors would like to thank Henry Kenlay and Marc Brockschmidt for useful discussions on GNNs.

REFERENCES

Uri Alon and Eran Yahav. On the bottleneck of graph neural networks and its practical implications. CoRR, abs/2006.05205, 2020. URL https://arxiv.org/abs/2006.05205.

Lei Jimmy Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. Layer normalization. CoRR, abs/1607.06450, 2016. URL http://arxiv.org/abs/1607.06450.

Bowen Baker, Ingmar Kanitscheider, Todor M. Markov, Yi Wu, Glenn Powell, Bob McGrew, and Igor Mordatch. Emergent tool use from multi-agent autocurricula. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net, 2020. URL https://openreview.net/forum?id=SkxpxJBKwS.

Peter W. Battaglia, Jessica B. Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinícius Flores Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, Çaglar Gülçehre, H. Francis Song, Andrew J. Ballard, Justin Gilmer, George E. Dahl, Ashish Vaswani, Kelsey R. Allen, Charles Nash, Victoria Langston, Chris Dyer, Nicolas Heess, Daan Wierstra, Pushmeet Kohli, Matthew Botvinick, Oriol Vinyals, Yujia Li, and Razvan Pascanu. Relational inductive biases, deep learning, and graph networks. CoRR, abs/1806.01261, 2018. URL http://arxiv.org/abs/1806.01261.

Charles Beattie, Joel Z. Leibo, Denis Teplyashin, Tom Ward, Marcus Wainwright, Heinrich Küttler, Andrew Lefrancq, Simon Green, Víctor Valdés, Amir Sadik, Julian Schrittwieser, Keith Anderson, Sarah York, Max Cant, Adam Cain, Adrian Bolton, Stephen Gaffney, Helen King, Demis Hassabis, Shane Legg, and Stig Petersen. Deepmind lab. CoRR, abs/1612.03801, 2016. URL http://arxiv.org/abs/1612.03801.

Wendelin Boehmer, Vitaly Kurin, and Shimon Whiteson. Deep coordination graphs. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pp. 980–991. PMLR, 2020. URL http://proceedings.mlr.press/v119/boehmer20a.html.

Tao Chen, Adithyavairavan Murali, and Abhinav Gupta. Hardware conditioned policies for multi-robot transfer learning. In Samy Bengio, Hanna M. Wallach, Hugo Larochelle, Kristen Grauman, Nicolò Cesa-Bianchi, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pp. 9355–9366, 2018. URL https://proceedings.neurips.cc/paper/2018/hash/b8cfbf77a3d250a4523ba67a65a7d031-Abstract.html.

Kyunghyun Cho, Bart van Merrienboer, Çaglar Gülçehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Alessandro Moschitti, Bo Pang, and Walter Daelemans (eds.), Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pp. 1724–1734. ACL, 2014. doi: 10.3115/v1/d14-1179. URL https://doi.org/10.3115/v1/d14-1179.

Carlo D'Eramo, Davide Tateo, Andrea Bonarini, Marcello Restelli, and Jan Peters. Sharing knowledge in multi-task deep reinforcement learning. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net, 2020. URL https://openreview.net/forum?id=rkgpv2VFvr.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. In Jill Burstein, Christy Doran, and Thamar Solorio (eds.), Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, 2019. doi: 10.18653/v1/n19-1423. URL https://doi.org/10.18653/v1/n19-1423.


Yan Duan, Marcin Andrychowicz, Bradly C. Stadie, Jonathan Ho, Jonas Schneider, Ilya Sutskever, Pieter Abbeel, and Wojciech Zaremba. One-shot imitation learning. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pp. 1087–1098, 2017. URL https://proceedings.neurips.cc/paper/2017/hash/ba3866600c3540f67c1e9575e213be0a-Abstract.html.

Scott Fujimoto, Herke van Hoof, and David Meger. Addressing function approximation error in actor-critic methods. In Jennifer G. Dy and Andreas Krause (eds.), Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, volume 80 of Proceedings of Machine Learning Research, pp. 1582–1591. PMLR, 2018. URL http://proceedings.mlr.press/v80/fujimoto18a.html.

Yoav Goldberg. Assessing BERT's syntactic abilities. CoRR, abs/1901.05287, 2019. URL http://arxiv.org/abs/1901.05287.

Marco Gori, Gabriele Monfardini, and Franco Scarselli. A new model for learning in graph domains. In Proceedings. 2005 IEEE International Joint Conference on Neural Networks, 2005., volume 2, pp. 729–734. IEEE, 2005.

Matteo Hessel, Hubert Soyer, Lasse Espeholt, Wojciech Czarnecki, Simon Schmitt, and Hado van Hasselt. Multi-task deep reinforcement learning with popart. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pp. 3796–3803. AAAI Press, 2019. doi: 10.1609/aaai.v33i01.33013796. URL https://doi.org/10.1609/aaai.v33i01.33013796.

Wenlong Huang, Igor Mordatch, and Deepak Pathak. One policy to control them all: Shared modular policies for agent-agnostic control. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pp. 4455–4464. PMLR, 2020. URL http://proceedings.mlr.press/v119/huang20d.html.

Shariq Iqbal and Fei Sha. Actor-attention-critic for multi-agent reinforcement learning. In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Proceedings of Machine Learning Research, pp. 2961–2970. PMLR, 2019. URL http://proceedings.mlr.press/v97/iqbal19a.html.

Shariq Iqbal, Christian A. Schröder de Witt, Bei Peng, Wendelin Böhmer, Shimon Whiteson, and Fei Sha. AI-QMIX: attention and imagination for dynamic multi-agent reinforcement learning. CoRR, abs/2006.04222, 2020. URL https://arxiv.org/abs/2006.04222.

Daniel D. Johnson, Hugo Larochelle, and Daniel Tarlow. Learning graph structure with a finite-state automaton layer. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin (eds.), Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020. URL https://proceedings.neurips.cc/paper/2020/hash/1fdc0ee9d95c71d73df82ac8f0721459-Abstract.html.

Anees Kazi, Luca Cosmo, Nassir Navab, and Michael M. Bronstein. Differentiable graph module (DGM) graph convolutional networks. CoRR, abs/2002.04999, 2020. URL https://arxiv.org/abs/2002.04999.

Elias B. Khalil, Hanjun Dai, Yuyu Zhang, Bistra Dilkina, and Le Song. Learning combinatorial optimization algorithms over graphs. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pp. 6348–6358, 2017. URL https://proceedings.neurips.cc/paper/2017/hash/d9896106ca98d3d05b8cbdf4fd8b13a1-Abstract.html.


Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. In Yoshua Bengio and Yann LeCun (eds.), 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings, 2014. URL http://arxiv.org/abs/1312.6114.

Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net, 2017. URL https://openreview.net/forum?id=SJU4ayYgl.

Thomas N. Kipf, Ethan Fetaya, Kuan-Chieh Wang, Max Welling, and Richard S. Zemel. Neural relational inference for interacting systems. In Jennifer G. Dy and Andreas Krause (eds.), Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, volume 80 of Proceedings of Machine Learning Research, pp. 2693–2702. PMLR, 2018. URL http://proceedings.mlr.press/v80/kipf18a.html.

Vitaly Kurin, Saad Godil, Shimon Whiteson, and Bryan Catanzaro. Can q-learning with graph networks learn a generalizable branching heuristic for a SAT solver? In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin (eds.), Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020. URL https://proceedings.neurips.cc/paper/2020/hash/6d70cb65d15211726dcce4c0e971e21c-Abstract.html.

Sheng Li, Jayesh K. Gupta, Peter Morales, Ross E. Allen, and Mykel J. Kochenderfer. Deep implicit coordination graphs for multi-agent reinforcement learning. CoRR, abs/2006.11438, 2020. URL https://arxiv.org/abs/2006.11438.

Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. In Yoshua Bengio and Yann LeCun (eds.), 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings, 2016. URL http://arxiv.org/abs/1509.02971.

Ricky Loynd, Roland Fernandez, Asli Çelikyilmaz, Adith Swaminathan, and Matthew J. Hausknecht. Working memory graphs. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pp. 6404–6414. PMLR, 2020. URL http://proceedings.mlr.press/v119/loynd20a.html.

Emilio Parisotto, Lei Jimmy Ba, and Ruslan Salakhutdinov. Actor-mimic: Deep multitask and transfer reinforcement learning. In Yoshua Bengio and Yann LeCun (eds.), 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings, 2016. URL http://arxiv.org/abs/1511.06342.

Emilio Parisotto, H. Francis Song, Jack W. Rae, Razvan Pascanu, Çaglar Gülçehre, Siddhant M. Jayakumar, Max Jaderberg, Raphaël Lopez Kaufman, Aidan Clark, Seb Noury, Matthew Botvinick, Nicolas Heess, and Raia Hadsell. Stabilizing transformers for reinforcement learning. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pp. 7487–7498. PMLR, 2020. URL http://proceedings.mlr.press/v119/parisotto20a.html.

Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in pytorch. 2017.

Judea Pearl. Probabilistic reasoning in intelligent systems - networks of plausible inference. Morgan Kaufmann series in representation and reasoning. Morgan Kaufmann, 1989.


Jan Peters and Stefan Schaal. Policy gradient methods for robotics. In 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2006, October 9-15, 2006, Beijing, China, pp. 2219–2225. IEEE, 2006. doi: 10.1109/IROS.2006.282564. URL https://doi.org/10.1109/IROS.2006.282564.

Matthew E. Peters, Mark Neumann, Luke Zettlemoyer, and Wen-tau Yih. Dissecting contextual word embeddings: Architecture and representation. In Ellen Riloff, David Chiang, Julia Hockenmaier, and Jun'ichi Tsujii (eds.), Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pp. 1499–1509. Association for Computational Linguistics, 2018. doi: 10.18653/v1/d18-1179. URL https://doi.org/10.18653/v1/d18-1179.

Tabish Rashid, Mikayel Samvelyan, Christian Schröder de Witt, Gregory Farquhar, Jakob N. Foerster, and Shimon Whiteson. Monotonic value function factorisation for deep multi-agent reinforcement learning. J. Mach. Learn. Res., 21:178:1–178:51, 2020. URL http://jmlr.org/papers/v21/20-081.html.

Andrei A. Rusu, Sergio Gomez Colmenarejo, Çaglar Gülçehre, Guillaume Desjardins, James Kirkpatrick, Razvan Pascanu, Volodymyr Mnih, Koray Kavukcuoglu, and Raia Hadsell. Policy distillation. In Yoshua Bengio and Yann LeCun (eds.), 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings, 2016. URL http://arxiv.org/abs/1511.06295.

Franco Scarselli, Sweah Liang Yong, Marco Gori, Markus Hagenbuchner, Ah Chung Tsoi, and Marco Maggini. Graph neural networks for ranking web pages. In Andrzej Skowron, Rakesh Agrawal, Michael Luck, Takahira Yamaguchi, Pierre Morizet-Mahoudeaux, Jiming Liu, and Ning Zhong (eds.), 2005 IEEE / WIC / ACM International Conference on Web Intelligence (WI 2005), 19-22 September 2005, Compiegne, France, pp. 666–672. IEEE Computer Society, 2005. doi: 10.1109/WI.2005.67. URL https://doi.org/10.1109/WI.2005.67.

John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. CoRR, abs/1707.06347, 2017. URL http://arxiv.org/abs/1707.06347.

Sequence-to-Sequence Modeling, Pytorch Tutorial. Sequence-to-sequence modeling with nn.transformer and torchtext. URL https://pytorch.org/tutorials/beginner/transformer_tutorial.html. [Online; accessed 8-August-2020].

H. Francis Song, Abbas Abdolmaleki, Jost Tobias Springenberg, Aidan Clark, Hubert Soyer, Jack W. Rae, Seb Noury, Arun Ahuja, Siqi Liu, Dhruva Tirumala, Nicolas Heess, Dan Belov, Martin A. Riedmiller, and Matthew M. Botvinick. V-MPO: on-policy maximum a posteriori policy optimization for discrete and continuous control. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net, 2020. URL https://openreview.net/forum?id=SylOlp4FvH.

Arash Tavakoli, Mehdi Fatemi, and Petar Kormushev. Learning to represent action values as a hypergraph on the action vertices. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=Xv_s64FiXTv.

Yi Tay, Mostafa Dehghani, Dara Bahri, and Donald Metzler. Efficient transformers: A survey. CoRR, abs/2009.06732, 2020. URL https://arxiv.org/abs/2009.06732.

Yee Whye Teh, Victor Bapst, Wojciech M. Czarnecki, John Quan, James Kirkpatrick, Raia Hadsell, Nicolas Heess, and Razvan Pascanu. Distral: Robust multitask reinforcement learning. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pp. 4496–4506, 2017. URL https://proceedings.neurips.cc/paper/2017/hash/0abdc563a06105aee3c6136871c9f4d1-Abstract.html.

Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R. Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R. Bowman, Dipanjan Das, and Ellie Pavlick. What do you learn from context? Probing for sentence structure in contextualized word representations. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net, 2019. URL https://openreview.net/forum?id=SJzSgnRcKX.

Emanuel Todorov, Tom Erez, and Yuval Tassa. Mujoco: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2012, Vilamoura, Algarve, Portugal, October 7-12, 2012, pp. 5026–5033. IEEE, 2012. doi: 10.1109/IROS.2012.6386109. URL https://doi.org/10.1109/IROS.2012.6386109.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pp. 5998–6008, 2017. URL https://proceedings.neurips.cc/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html.

Jesse Vig and Yonatan Belinkov. Analyzing the structure of attention in a transformer language model. CoRR, abs/1906.04284, 2019. URL http://arxiv.org/abs/1906.04284.

Nelson Vithayathil Varghese and Qusay H. Mahmoud. A survey of multi-task deep reinforcement learning. Electronics, 9(9):1363–1384, Aug 2020. ISSN 2079-9292. doi: 10.3390/electronics9091363. URL http://dx.doi.org/10.3390/electronics9091363.

Tingwu Wang, Renjie Liao, Jimmy Ba, and Sanja Fidler. Nervenet: Learning structured policy with graph neural networks. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net, 2018. URL https://openreview.net/forum?id=S1sqHMZCb.

Vinícius Flores Zambaldi, David Raposo, Adam Santoro, Victor Bapst, Yujia Li, Igor Babuschkin, Karl Tuyls, David P. Reichert, Timothy P. Lillicrap, Edward Lockhart, Murray Shanahan, Victoria Langston, Razvan Pascanu, Matthew Botvinick, Oriol Vinyals, and Peter W. Battaglia. Relational deep reinforcement learning. CoRR, abs/1806.01830, 2018. URL http://arxiv.org/abs/1806.01830.

A REPRODUCIBILITY

We initially took the transformer implementation from the official PyTorch tutorial (Sequence-to-Sequence Modeling, Pytorch Tutorial), which uses TransformerEncoderLayer from PyTorch (Paszke et al., 2017). We modified it for the regression task instead of classification, and removed the masking and the positional encoding. Table 1 provides all the hyperparameters needed to replicate our experiments.

Table 1: Hyperparameters of our experiments

Hyperparameter            Value      Comment

AMORPHEUS
– Learning rate           0.0001
– Gradient clipping       0.1
– Normalisation           LayerNorm  As an argument to TransformerEncoder in torch.nn
– Attention layers        3
– Attention heads         2
– Attention hidden size   256
– Encoder output size     128

Training
– Runs                    3          Per benchmark
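For concreteness, the following minimal sketch (ours, not the original AMORPHEUS code) shows how an encoder with the Table 1 settings can be instantiated in PyTorch; no attention mask or positional encoding is applied, so every limb can attend to every other limb. Variable names are illustrative.

```python
import torch
import torch.nn as nn

# Hyperparameters taken from Table 1.
d_model, n_heads, hidden, n_layers = 128, 2, 256, 3

layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                   dim_feedforward=hidden)
# LayerNorm passed as an argument to TransformerEncoder, as noted in Table 1.
encoder = nn.TransformerEncoder(layer, num_layers=n_layers,
                                norm=nn.LayerNorm(d_model))

# One embedding per limb, shape (num_limbs, batch, d_model); no mask, no
# positional encoding.
limb_embeddings = torch.randn(7, 1, d_model)
encoded = encoder(limb_embeddings)  # a per-limb regression head (not shown) follows
print(encoded.shape)  # torch.Size([7, 1, 128])
```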

AMORPHEUS makes use of gradient clipping and a smaller learning rate. We found that SMP also performs better with the decreased learning rate (0.0001), so we use it throughout this work. Figure 7 demonstrates the effect of the smaller learning rate on Walker++. All other SMP hyperparameters are as reported in the original paper, with two-directional message passing.
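As a hypothetical illustration of these settings, an update step could look like the sketch below; we assume norm-based clipping with threshold 0.1, which may differ from the exact clipping variant used in the codebase.

```python
import torch
import torch.nn as nn

policy = nn.Linear(16, 4)  # stand-in for the actual policy network
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)  # smaller learning rate

obs, target = torch.randn(32, 16), torch.randn(32, 4)
loss = nn.functional.mse_loss(policy(obs), target)

optimizer.zero_grad()
loss.backward()
# Gradient clipping with threshold 0.1 (assumed to be clipping by norm).
torch.nn.utils.clip_grad_norm_(policy.parameters(), max_norm=0.1)
optimizer.step()
```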

[Plot: average return across environments vs. total environment steps (1e7), curves: smp with smaller learning rate, smp vanilla.]

Figure 7: A smaller learning rate makes SMP yield better results on Walker++.

[Plot: episode return vs. environment steps (1e7), curves: nervenet without return limits, nervenet on the original Walkers.]

Figure 8: Removing the return limit slightly deteriorates the performance of NerveNet on Walkers.

Wang et al. (2018) add an artificial return limit of 3800 to their Walkers environments. We remove this limit and compare the methods without it. For NerveNet, we plot the results with whichever option works best for it. Figure 8 compares the two options.
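To make the return limit concrete, a hypothetical wrapper implementing such a cap could look like the sketch below (our illustration, not the implementation of Wang et al. (2018)).

```python
import gym

class ReturnLimit(gym.Wrapper):
    """Ends an episode once the cumulative reward reaches a fixed cap,
    mimicking the 3800 return limit described above."""

    def __init__(self, env, limit=3800.0):
        super().__init__(env)
        self.limit = limit
        self.episode_return = 0.0

    def reset(self, **kwargs):
        self.episode_return = 0.0
        return self.env.reset(**kwargs)

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        self.episode_return += reward
        if self.episode_return >= self.limit:
            done = True  # cap reached: terminate the episode
        return obs, reward, done, info
```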


Table 2: Full list of environments used in this work.

Walker++
  Training: walker-2-main, walker-4-main, walker-5-main
  Zero-shot testing: walker-3-main, walker-6-main, walker-7-main

Humanoid++
  Training: humanoid-2d-7-left-arm, humanoid-2d-7-lower-arms, humanoid-2d-7-right-arm, humanoid-2d-8-left-knee, humanoid-2d-9-full
  Zero-shot testing: humanoid-2d-7-left-leg, humanoid-2d-8-right-knee, humanoid-2d-7-right-leg

Cheetah++
  Training: cheetah-2-back, cheetah-2-front, cheetah-3-back, cheetah-3-front, cheetah-4-allback, cheetah-4-allfront, cheetah-4-back, cheetah-4-front, cheetah-5-balanced, cheetah-5-front, cheetah-6-back, cheetah-7-full
  Zero-shot testing: cheetah-3-balanced, cheetah-5-back, cheetah-6-front

Cheetah-Walker-Humanoid++
  Training: all training environments in the rows above
  Zero-shot testing: all zero-shot testing environments in the rows above

Hopper++
  Training: hopper-3, hopper-4, hopper-5
  Zero-shot testing: —

Cheetah-Walker-Humanoid-Hopper++
  Training: all training environments in the rows above
  Zero-shot testing: all zero-shot testing environments in the rows above

Walkers from Wang et al. (2018)
  Training: Ostrich, HalfCheetah, FullCheetah, Hopper, HalfHumanoid
  Zero-shot testing: —


B MORPHOLOGY ABLATIONS

Figure 9 shows examples of the graph topologies we used in the structure ablation experiments.

[Graph diagrams with nodes: torso, upper arms, lower arms, thighs, shins.]

Figure 9: Examples of graph topologies used in the structure ablation experiments: (a) Morphology, (b) Star, (c) Line.
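For illustration, the star and line topologies can be built from the same set of limb nodes as in the sketch below (the limb ordering is hypothetical and not the exact experiment code).

```python
# Limb nodes roughly matching Figure 9; the ordering below is arbitrary.
limbs = ["torso",
         "upper arm L", "lower arm L", "upper arm R", "lower arm R",
         "thigh L", "shin L", "thigh R", "shin R"]

# Star topology: the torso (index 0) is connected to every other limb.
star_edges = [(0, i) for i in range(1, len(limbs))]

# Line topology: limbs are chained in a fixed order, ignoring the morphology.
line_edges = [(i, i + 1) for i in range(len(limbs) - 1)]

print(star_edges)
print(line_edges)
```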


C ATTENTION MASK ANALYSIS

C.1 EVOLUTION OF MASKS THROUGHOUT THE TRAINING PROCESS

Figures 10, 11 and 12 demonstrate the evolution of AMORPHEUS attention masks during training.
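The masks in the figures below are attention weight matrices over limbs. A minimal sketch of how such a matrix could be extracted and plotted (using a standalone attention layer rather than the actual AMORPHEUS logging code) is:

```python
import torch
import torch.nn as nn
import matplotlib.pyplot as plt

limbs = ["torso", "left1", "left2", "left3", "right1", "right2", "right3"]
attention = nn.MultiheadAttention(embed_dim=128, num_heads=2)

# Dummy per-limb embeddings, shape (num_limbs, batch, features).
x = torch.randn(len(limbs), 1, 128)
# need_weights=True returns attention weights averaged over heads, shape (batch, L, L).
_, weights = attention(x, x, x, need_weights=True)

plt.imshow(weights[0].detach().numpy())
plt.xticks(range(len(limbs)), limbs, rotation=90)
plt.yticks(range(len(limbs)), limbs)
plt.tight_layout()
plt.show()
```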

[Attention mask heatmaps; rows and columns correspond to the limbs torso, left1, left2, left3, right1, right2, right3.]

Figure 10: Walker++ masks for the 3 attention layers on Walker-7 at the beginning of training.

[Attention mask heatmaps; same limb labels as in Figure 10.]

Figure 11: Walker++ masks for the 3 attention layers on Walker-7 after 2.5 million frames.

[Attention mask heatmaps; same limb labels as in Figure 10.]

Figure 12: Walker++ masks for the 3 attention layers on Walker-7 at a later stage of training.


C.2 ATTENTION MASKS CUMULATIVE CHANGE

[Three plots of cumulative change vs. policy step t, one per model, each with curves for layer 0, layer 1, and layer 2.]

Figure 13: Absolute cumulative change in the attention masks for three different models on Walker-7.
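One plausible way to compute such a curve from a sequence of per-step attention masks (our reading of "absolute cumulative change", not necessarily the exact metric used) is sketched below.

```python
import numpy as np

def cumulative_change(masks: np.ndarray) -> np.ndarray:
    """masks: (T, n, n) array with one attention mask per policy step.
    Returns a length-T curve of the running sum of absolute element-wise
    changes between consecutive masks."""
    step_changes = np.abs(np.diff(masks, axis=0)).sum(axis=(1, 2))  # (T-1,)
    return np.concatenate([[0.0], np.cumsum(step_changes)])

# Example: random masks for 1000 steps of a 7-limb agent.
curve = cumulative_change(np.random.rand(1000, 7, 7))
print(curve.shape)  # (1000,)
```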

D GENERALISATION RESULTS

Table 3: Initial results on generalisation. The numbers show the average performance of three seeds evaluated on 100 rollouts and the standard error of the mean. While the average values are higher for AMORPHEUS on 5 out of 7 benchmarks, the high variance of both methods might be indicative of instabilities in generalisation behaviour due to large differences between the training and testing tasks.

Environment                  AMORPHEUS           SMP
walker-3-main                666.24 (133.66)     175.65 (157.38)
walker-6-main                1171.35 (832.91)    729.26 (135.60)
humanoid-2d-7-left-leg       2821.22 (1340.29)   2158.29 (785.33)
humanoid-2d-8-right-knee     2717.21 (624.80)    327.93 (125.75)
cheetah-3-balanced           474.82 (74.05)      156.16 (33.00)
cheetah-5-back               3417.72 (306.84)    3820.77 (301.95)
cheetah-6-front              5081.71 (391.08)    6019.07 (506.55)

E RESIDUAL CONNECTION ABLATION

We use the residual connection in AMORPHEUS as a safety mechanism to prevent nodes from forgetting their own observations. To check that AMORPHEUS's improvements do not come from the residual connection alone, we performed an ablation. As Figure 14 shows, we cannot attribute the success of our method to this improvement alone. The high variance on Humanoid++ is due to one seed starting to improve much later, which dragged down the average performance.
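A minimal sketch of one way such a residual connection could be wired (the exact placement in AMORPHEUS may differ) is:

```python
import torch
import torch.nn as nn

class EncoderWithResidual(nn.Module):
    """Adds each limb's own embedding back to the transformer output so that
    nodes cannot entirely 'forget' their own observations."""

    def __init__(self, d_model=128, n_heads=2, hidden=256, n_layers=3):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=hidden)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, limb_embeddings):
        # limb_embeddings: (num_limbs, batch, d_model)
        out = self.encoder(limb_embeddings)
        return out + limb_embeddings  # residual connection around the encoder

x = torch.randn(7, 1, 128)
print(EncoderWithResidual()(x).shape)  # torch.Size([7, 1, 128])
```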


[Eight plots of average return across environments vs. total environment steps (1e7); curves: amorpheus, amorpheus (no conditioning), smp. Panels: (a) Walker++, (b) Cheetah++, (c) Humanoid++, (d) Hopper++, (e) Walker-Humanoid++, (f) Walker-Humanoid-Hopper++, (g) Cheetah-Walker-Humanoid++, (h) Cheetah-Walker-Humanoid-Hopper++.]

Figure 14: Residual connection ablation experiment.
