Show simple item record

dc.contributor.author: Fernández Gauna, Borja
dc.contributor.author: Etxeberria Agiriano, Ismael
dc.contributor.author: Graña Romay, Manuel María
dc.date.accessioned: 2016-04-11T13:18:11Z
dc.date.available: 2016-04-11T13:18:11Z
dc.date.issued: 2015-07-09
dc.identifier.citation: PLOS ONE 10(7) : (2015) // Article ID e0127129
dc.identifier.issn: 1932-6203
dc.identifier.uri: http://hdl.handle.net/10810/17878
dc.description.abstract: Multi-Agent Reinforcement Learning (MARL) algorithms face two main difficulties: the curse of dimensionality, and environment non-stationarity due to the independent learning processes carried out concurrently by the agents. In this paper we formalize and prove the convergence of a Distributed Round Robin Q-learning (D-RR-QL) algorithm for cooperative systems. The computational complexity of this algorithm increases linearly with the number of agents. Moreover, it eliminates environment non-stationarity by carrying out a round-robin scheduling of action selection and execution. This learning scheme allows the implementation of Modular State-Action Vetoes (MSAV) in cooperative multi-agent systems, which speeds up learning convergence in over-constrained systems by vetoing state-action pairs that lead to undesired termination states (UTS) in the relevant state-action subspace. Each agent's local state-action value function learning is an independent process, including the MSAV policies. Coordination of locally optimal policies to obtain the globally optimal joint policy is achieved by a greedy selection procedure using message passing. We show that D-RR-QL improves over state-of-the-art approaches, such as Distributed Q-Learning, Team Q-Learning and Coordinated Reinforcement Learning, in a paradigmatic Linked Multi-Component Robotic System (L-MCRS) control problem: the hose transportation task. L-MCRS are over-constrained systems with many UTS induced by the interaction of the passive linking element and the active mobile robots. (A schematic sketch of the round-robin learning scheme is given after this record.)
dc.description.sponsorship: This research has been partially funded by the EU through the SandS project, grant agreement no. 317947. This research has been partially funded by grant TIN2011-23823 of the Ministerio de Ciencia e Innovación of the Spanish Government (MINECO), with FEDER funds. The GIC has been supported by grant IT874-13 as university research group category A. MG was supported by the EC under FP7, Coordination and Support Action, Grant Agreement Number 316097, ENGINE European Research Centre of Network Intelligence for Innovation Enhancement. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
dc.language.iso: eng
dc.publisher: Public Library Science
dc.relation: info:eu-repo/grantAgreement/EC/FP7/317947
dc.relation: info:eu-repo/grantAgreement/MINECO/TIN2011-23823
dc.rights: info:eu-repo/semantics/openAccess
dc.subject: system control
dc.subject: reinforcement
dc.subject: constraints
dc.subject: MDPS
dc.title: Learning Multirobot Hose Transportation and Deployment by Distributed Round-Robin Q-Learning
dc.type: info:eu-repo/semantics/article
dc.rights.holder: © 2015 Fernandez-Gauna et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
dc.relation.publisherversion: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0127129#abstract0
dc.identifier.doi: 10.1371/journal.pone.0127129
dc.departamentoes: Ciencia de la computación e inteligencia artificial
dc.departamentoes: Lenguajes y sistemas informáticos
dc.departamentoeu: Konputazio zientziak eta adimen artifiziala
dc.departamentoeu: Hizkuntza eta sistema informatikoak
dc.subject.categoria: AGRICULTURAL AND BIOLOGICAL SCIENCES
dc.subject.categoria: MEDICINE
dc.subject.categoria: BIOCHEMISTRY AND MOLECULAR BIOLOGY
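
The abstract above describes D-RR-QL only at a high level; the published article should be consulted for the exact algorithm. The following is a minimal, hypothetical Python sketch of the two ideas the abstract names: agents select, execute and learn one at a time in a fixed round-robin order (so each agent faces a stationary environment during its turn), and state-action pairs that led to undesired termination states (UTS) are vetoed from future action selection, in the spirit of Modular State-Action Vetoes. The environment interface (env.reset, env.step returning a UTS flag), the class and all parameter names are assumptions for illustration, not the authors' implementation; the message-passing coordination of local policies is omitted.

import random
from collections import defaultdict

class RoundRobinQLearner:
    """Sketch of round-robin Q-learning with state-action vetoes.
    One Q-table per agent; agents act and update one at a time."""

    def __init__(self, n_agents, actions, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.n_agents = n_agents
        self.actions = list(actions)                 # shared discrete action set
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.Q = [defaultdict(float) for _ in range(n_agents)]   # Q[i][(s, a)]
        self.vetoed = [set() for _ in range(n_agents)]           # MSAV-style veto sets

    def allowed(self, i, s):
        acts = [a for a in self.actions if (s, a) not in self.vetoed[i]]
        return acts or self.actions                  # fall back if everything is vetoed

    def select(self, i, s):
        acts = self.allowed(i, s)
        if random.random() < self.epsilon:           # epsilon-greedy exploration
            return random.choice(acts)
        return max(acts, key=lambda a: self.Q[i][(s, a)])

    def update(self, i, s, a, r, s_next, terminal=False, uts=False):
        if uts:
            self.vetoed[i].add((s, a))               # veto pairs that led to a UTS
        if terminal:
            target = r                               # no bootstrapping past termination
        else:
            best_next = max(self.Q[i][(s_next, b)] for b in self.allowed(i, s_next))
            target = r + self.gamma * best_next
        self.Q[i][(s, a)] += self.alpha * (target - self.Q[i][(s, a)])

    def episode(self, env, max_steps=200):
        """Agents take turns in a fixed order, so from each agent's point of
        view the others are frozen while it selects, acts and updates."""
        s = env.reset()                              # assumed environment API
        for step in range(max_steps):
            i = step % self.n_agents                 # round-robin turn
            a = self.select(i, s)
            s_next, r, done, uts = env.step(i, a)    # assumed: returns a UTS flag
            self.update(i, s, a, r, s_next, terminal=done, uts=uts)
            s = s_next
            if done:
                break

Because only one agent updates per step and vetoed pairs are simply excluded from the argmax, each local Q-table grows with that agent's own state-action space rather than with the joint action space, which is the linear-in-agents scaling the abstract refers to.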

