Relative Overgeneralisation in Cooperative Multi-Agent Reinforcement Learning - Dr. Bei Peng
Event details
Description
Dr. Bei Peng is a Lecturer (Assistant Professor) in Artificial Intelligence in the Department of Computer Science at the University of Liverpool. She has authored numerous works in reinforcement learning.
Title: Relative Overgeneralisation in Cooperative Multi-Agent Reinforcement Learning
Abstract: In cooperative multi-agent tasks, independent learning agents often fail to converge to optimal joint solutions due to a pathology called "relative overgeneralisation". Relative overgeneralisation arises when an agent assigns its highest utility estimate to a sub-optimal action due to the random, exploratory behaviour of the other agents with which it is learning to collaborate. This can cause the agents to become stuck in local optima in the joint policy space, and fail to learn the most effective coordination strategies. We find that many existing deep multi-agent reinforcement learning (MARL) algorithms suffer from this pathology, resulting in them failing to solve cooperative tasks requiring close coordination between the agents. In this talk, I will discuss two of our recent works that aim to better address relative overgeneralisation in cooperative MARL.
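The pathology the abstract describes is often illustrated with the classic "climbing game" matrix game; the sketch below is an illustrative example of that setup, not code from the talk or the speaker's papers. Two independent learners each estimate a utility per own action while the partner explores at random; the truly optimal joint action is dragged down by the partner's exploration, so both agents end up preferring a sub-optimal action.

```python
import random

# Classic "climbing game" payoff matrix, a standard illustration of
# relative overgeneralisation (illustrative example, not from the talk).
# Rows = agent 1's action, columns = agent 2's action.
# The optimal joint action (0, 0) pays 11, but is flanked by -30
# penalties whenever only one agent plays action 0.
PAYOFF = [
    [ 11, -30,   0],
    [-30,   7,   6],
    [  0,   0,   5],
]

def independent_learners(steps=30000, seed=0):
    """Two independent learners, each estimating a utility per *own*
    action as a running sample mean over plays with an exploring partner."""
    rng = random.Random(seed)
    q = [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]  # per-agent utility estimates
    n = [[0, 0, 0], [0, 0, 0]]              # per-agent action counts
    for _ in range(steps):
        # Fully exploratory phase: both agents act uniformly at random,
        # so each utility estimate averages over the partner's actions.
        a1, a2 = rng.randrange(3), rng.randrange(3)
        r = PAYOFF[a1][a2]
        for i, a in ((0, a1), (1, a2)):
            n[i][a] += 1
            q[i][a] += (r - q[i][a]) / n[i][a]  # incremental sample mean
    return q

q = independent_learners()
greedy = tuple(max(range(3), key=qi.__getitem__) for qi in q)
# Action 2's estimate (about 5/3) beats action 0's (about -19/3):
# the partner's exploration drags down the truly optimal action, so
# both greedy policies settle on the sub-optimal joint action (2, 2),
# earning 5 instead of the optimal 11.
print(greedy, PAYOFF[greedy[0]][greedy[1]])
```

Each agent's per-action estimate marginalises over the partner's behaviour, which is exactly why action 0 (optimal only under coordination) looks worse than the "safe" action 2 — the local optimum the abstract describes.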
Location
Coordinates: 53.381110350712, -1.4799692802854