Data-driven and learning-based methods for control: Challenges and new horizons
Rahul Jain – University of Southern California, United States
Webinar link
Meeting ID: 910 7928 6959
Passcode: VISS
The use of data-driven and learning-based methods for offline control design and real-time online control has become imperative for modern control systems, which are increasingly complex and expected to operate in uncertain, non-stationary environments with a high degree of autonomy while meeting various operational and safety requirements. In this talk, I will give a personal perspective on the challenges and recent advances in learning and control problems. I will first discuss some of my contributions to offline reinforcement learning for continuous (state and action space) control problems when a generative model is available, with an application in robotics. I will then discuss recent developments and my contributions to online reinforcement learning for control in various contexts: infinite and random horizon, partial observability, and multi-agent settings. The focus will be on “regret” as a key measure of learning efficiency. Next, I will talk very briefly about control design when stringent safety specifications expressed in temporal logic languages must be satisfied, both when the model is known and when it is unknown. Finally, I will conclude with a discussion of new horizons, especially those driven by intelligent autonomy applications in robotics and other autonomous systems.
Biography: Rahul Jain is Professor of Electrical and Computer Engineering, Computer Science* and ISE* (*by courtesy) at the University of Southern California (USC). He received a B.Tech. from IIT Kanpur, and an M.A. in Statistics and a Ph.D. in EECS from the University of California, Berkeley. Prior to joining USC, he was in the Mathematical Sciences division at the IBM T. J. Watson Research Center, Yorktown Heights, NY. He is currently a Founding Director of the industry-sponsored USC Center for AI and Autonomy. He has received numerous awards, including the NSF CAREER award, the ONR Young Investigator award, an IBM Faculty award, and the James H. Zumberge Faculty Research and Innovation Award, and has been a US Fulbright Scholar. His interests span stochastic control, reinforcement and statistical learning, stochastic networks, and game theory, with applications in energy systems and autonomous robotics. The talk is based on joint work with a number of outstanding students and postdocs who are now themselves well-regarded academics.
Location
Montréal, Québec
Canada