
Exploring the Effects of Conditioning Independent Q-Learners on the Sufficient Statistic for Dec-POMDPs



Delft University of Technology

Exploring the Effects of Conditioning Independent Q-Learners on the Sufficient Statistic

for Dec-POMDPs

Mandersloot, A.V.; Oliehoek, F.A.; Czechowski, A.T.

Publication date

2020

Document Version

Final published version

Published in

BNAIC/BeneLearn 2020

Citation (APA)

Mandersloot, A. V., Oliehoek, F. A., & Czechowski, A. T. (2020). Exploring the Effects of Conditioning Independent Q-Learners on the Sufficient Statistic for Dec-POMDPs. In L. Cao, W. Kosters, & J. Lijffijt (Eds.), BNAIC/BeneLearn 2020 (pp. 423-424). RU Leiden. http://bnaic.liacs.leidenuniv.nl/wordpress/wp-content/uploads/papers/BNAICBENELEARN_2020_Final_paper_65.pdf

Important note

To cite this publication, please use the final published version (if applicable).

Please check the document version above.

Copyright

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Takedown policy

Please contact us and provide details if you believe this document breaches copyrights. We will remove access to the work immediately and investigate your claim.

This work is downloaded from Delft University of Technology.


Proceedings, November 19–20, 2020


Exploring the effects of conditioning Independent Q-Learners on the Sufficient Statistic for Dec-POMDPs

Alex Mandersloot [0000-0003-1617-2934], Frans Oliehoek [0000-0003-4372-5055], and Aleksander Czechowski [0000-0002-6054-9842]

Department of Intelligent Systems, Delft University of Technology, Delft, The Netherlands

Abstract. In this study, we investigate the effects of conditioning Independent Q-Learners (IQL) not solely on the individual action-observation history, but additionally on the sufficient plan-time statistic for Decentralized Partially Observable Markov Decision Processes. In doing so, we attempt to address a key shortcoming of IQL, namely that it is likely to converge to a Nash Equilibrium that can be arbitrarily poor. We identify a novel exploration strategy for IQL when it conditions on the sufficient statistic, and furthermore show that sub-optimal equilibria can be escaped consistently by sequencing the decision-making during learning. The practical limitation is the exponential complexity of both the sufficient statistic and the decision rules.

Keywords: Deep Reinforcement Learning · Multi-Agent · Partial Observability · Decentralized Execution.

Introduction: The Decentralized Partially Observable Markov Decision Process (Dec-POMDP) is a widely used framework to formally model scenarios in which multiple agents must collaborate using private information. A key difficulty of a Dec-POMDP is that to coordinate successfully, an agent should decide on actions not only using its own action-observation history, but also by reasoning about the information that might be available to the other agents.
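For reference, a common formulation of the Dec-POMDP (following, e.g., [3]) is the tuple below; the paper does not restate this definition, so the notation here is ours:

\[
\mathcal{M} = \langle \mathcal{D}, S, \{A^i\}_{i \in \mathcal{D}}, T, R, \{\Omega^i\}_{i \in \mathcal{D}}, O, h, b_0 \rangle,
\]

where $\mathcal{D} = \{1, \dots, n\}$ is the set of agents, $S$ the set of states, $A^i$ the actions of agent $i$, $T(s' \mid s, a)$ the transition function for joint actions $a$, $R(s, a)$ the shared reward, $\Omega^i$ the observations of agent $i$, $O(o \mid a, s')$ the joint observation function, $h$ the horizon, and $b_0$ the initial state distribution.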

Independent Q-Learning (IQL) [1] is an easily-scalable multi-agent Reinforcement Learning method in which each agent concurrently learns the value of individual actions based on its individual information. It is well understood that such individual action-values are insufficient to capture the inter-agent dependency, and consequently IQL is not guaranteed to converge to the optimal joint policy. Instead, it is likely to converge to a joint policy that is in Nash Equilibrium [2]. However, such equilibria can be arbitrarily poor.
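For concreteness, here is a minimal tabular sketch of the IQL baseline (our illustration, not code from the paper; the class interface, learning rate, and history encoding are assumptions). Each agent runs such a learner concurrently on its own action-observation history, treating the other agents as part of the environment:

```python
from collections import defaultdict
import random

class IndependentQLearner:
    """Tabular Independent Q-Learner: one table per agent, keyed by that agent's
    own action-observation history; other agents are not modelled explicitly."""

    def __init__(self, actions, alpha=0.1, gamma=1.0):
        # gamma=1.0 reflects the undiscounted finite-horizon setting of the
        # paper's experiments; alpha is an assumed learning rate.
        self.q = defaultdict(float)  # (individual_history, action) -> value
        self.actions = actions
        self.alpha, self.gamma = alpha, gamma

    def act(self, history, epsilon):
        # Epsilon-greedy over the agent's own action-values.
        if random.random() < epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(history, a)])

    def update(self, history, action, reward, next_history, done):
        best_next = 0.0 if done else max(self.q[(next_history, a)] for a in self.actions)
        target = reward + self.gamma * best_next
        self.q[(history, action)] += self.alpha * (target - self.q[(history, action)])
```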

It is precisely this obliviousness of IQL to the presence of other learning agents that motivates us to additionally condition IQL on the sufficient statistic for Dec-POMDPs [3], which contains a distribution over the joint action-observation history induced by the joint policy followed thus far. As a result, each agent is equipped with an accurate belief over the local information available to the other agents, and is able to adjust its own behavior accordingly.
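To illustrate what this statistic is, the sketch below (our own hedged reconstruction based on [3]; the function names, dictionary representation, and the assumption of a known Dec-POMDP model are ours) maintains a distribution over joint action-observation histories paired with states and updates it after each joint decision rule. Marginalizing over states yields the distribution each learner is conditioned on; its support grows exponentially with the horizon, which is the practical limitation noted in the abstract.

```python
from collections import defaultdict

def update_sufficient_statistic(sigma, joint_rule, T, O, states, joint_observations):
    """One-step update of the plan-time sufficient statistic.

    sigma: dict mapping (joint_history, state) -> probability, where a joint
           history is a tuple of (joint_action, joint_observation) pairs.
    joint_rule: the deterministic joint decision rule used at this stage; it
           maps a joint history to the joint action the agents take.
    T(s, a, s_next) and O(a, s_next, o): transition and observation probabilities
           of the (assumed known) Dec-POMDP model.
    """
    new_sigma = defaultdict(float)
    for (joint_history, s), p in sigma.items():
        a = joint_rule(joint_history)
        for s_next in states:
            p_trans = T(s, a, s_next)
            if p_trans == 0.0:
                continue
            for o in joint_observations:
                p_obs = O(a, s_next, o)
                if p_obs == 0.0:
                    continue
                # Extend the joint history and accumulate its probability mass.
                new_sigma[(joint_history + ((a, o),), s_next)] += p * p_trans * p_obs
    return dict(new_sigma)
```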



Experiments: We train a Deep Q-Network for each agent that conditions on the individual action-observation history $\theta^i_t$ and the sufficient statistic $\sigma_t$, and learns the value of individual actions $Q^i_t(\theta^i_t, \sigma_t, a^i_t)$. Methods are evaluated in the two-agent Decentralized Tiger environment with a horizon of 3. To escape poor equilibria, an exploratory action of one agent should be observable to the others. To accomplish this, our agents explore in the space of entire decision rules. The sufficient statistic captures such decision rules, and thus facilitates the communication of exploratory actions among the agents. Importantly, however, the sufficient statistic summarizes only the history of joint decision rules. For current exploratory decision rules to be observable to others, we therefore additionally sequence the decision-making during learning. Specifically, agent 1 acts first and agent $n$ is last to act. Each agent $i$ then additionally conditions on the current (possibly exploratory) decision rules $\delta^{1:i-1}_t$ of the agents that acted before it, to learn $Q^i_t(\theta^i_t, \sigma_t, \delta^{1:i-1}_t, a^i_t)$. Our learners are able to consistently escape sub-optimal equilibria and learn the optimal policy, even when we explicitly force such equilibria upon the agents initially (Fig. 1).
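The sequenced, decision-rule-level exploration described above could look as follows (a hedged sketch, not the published implementation; `agent.q_value`, the enumeration of candidate individual histories, and the epsilon-greedy scheme are assumptions, and the actual experiments use a Deep Q-Network rather than a lookup):

```python
import random

def build_decision_rule(agent, histories_i, sigma, prev_rules, actions, epsilon):
    """Agent i's stage-t decision rule: a mapping from each of its possible
    individual action-observation histories to an action. With probability
    epsilon the entire rule is exploratory, so the exploration is visible to
    the agents that decide after agent i; otherwise it is greedy with respect
    to Q^i(theta^i, sigma, delta^{1:i-1}, a^i)."""
    if random.random() < epsilon:
        return {theta: random.choice(actions) for theta in histories_i}
    return {theta: max(actions, key=lambda a: agent.q_value(theta, sigma, prev_rules, a))
            for theta in histories_i}

def select_joint_decision_rule(agents, histories, sigma, actions, epsilon):
    """Sequenced decision-making: agent 1 commits to its decision rule first and
    agent n last; each agent conditions on the (possibly exploratory) rules
    already chosen by its predecessors."""
    rules = []
    for i, agent in enumerate(agents):
        rules.append(build_decision_rule(agent, histories[i], sigma,
                                         tuple(rules), actions, epsilon))
    return tuple(rules)
```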

Fig. 1: All 50 learning curves.

Table 1: Results across the 50 runs.
Average Reward (std): 5.00 (0.77)
Ratio Optimal Policies: 0.92

This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 758824, INFLUENCE).

References

1. Tan, M.: Multi-agent reinforcement learning: Independent vs. cooperative agents. In: Proceedings of the Tenth International Conference on Machine Learning, pp. 330-337 (1993)

2. Boutilier, C.: Sequential optimality and coordination in multiagent systems. In: IJCAI, pp. 478-485 (1999)

3. Oliehoek, F.A.: Sufficient plan-time statistics for decentralized POMDPs. In: Twenty-Third International Joint Conference on Artificial Intelligence (2013)
