Connections between POMDPs and partially observed n-player mean-field games
Bora Yongacoglu – University of Toronto, Canada
In this talk, we will study a discrete-time model of mean-field games with finitely many players and partial observability of the global state, and we will describe the deep connection between such n-player mean-field games and partially observed Markov decision problems (POMDPs). We focus primarily on settings with mean-field observability, where each player privately observes its own local state as well as the complete mean-field distribution. We prove that if one's counterparts use symmetric, stationary, memoryless policies, then a given agent faces a fully observed, time-homogeneous MDP. We leverage this to prove the existence of a memoryless, stationary perfect equilibrium in the n-player game with mean-field observability. We also show, by example, that the symmetry condition cannot be relaxed. Under narrower observation channels, in which the mean-field information is compressed before being observed by each agent, we show that the agent faces a POMDP rather than an MDP, even when its counterparts use symmetric policies.
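For context, a minimal sketch of the mean-field observability setup in generic notation (the symbols below are illustrative and not necessarily those used in the talk): with local states $x^i_t \in \mathsf{X}$ for players $i = 1, \dots, n$, the mean-field term is the empirical distribution of local states,
\[
\mu^n_t = \frac{1}{n} \sum_{i=1}^{n} \delta_{x^i_t},
\]
and under mean-field observability player $i$ observes the pair $(x^i_t, \mu^n_t)$ at each time $t$. If every other player uses the same stationary memoryless policy $\pi(\,\cdot \mid x, \mu)$, the pair $(x^i_t, \mu^n_t)$ evolves as a Markov chain controlled by player $i$ alone, which yields the fully observed, time-homogeneous MDP described above. Under a compressed channel, player $i$ instead observes $(x^i_t, \phi(\mu^n_t))$ for some non-injective map $\phi$, and this observation need not be a sufficient statistic, leaving player $i$ with a POMDP.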
Bio: Bora Yongacoglu is a postdoctoral fellow in the Department of Electrical and Computer Engineering at the University of Toronto, where he studies learning in multi-agent systems. He received his PhD and MSc degrees in mathematics from Queen's University, and his BA in mathematics and economics from McGill University.
Location
CIM
McConnell Building
McGill University
Montréal QC H3A 0E9
Canada