Limitations of Large Language Models
Sarath Chandar – Polytechnique Montréal, Canada
Large language models (LLMs) are increasingly used in downstream applications, not only in natural language processing but also in other domains including computer vision, reinforcement learning, and scientific discovery, to name a few. This talk will focus on the limitations of using LLMs as task solvers. What are the effects of using LLMs as task solvers? What kinds of knowledge can an LLM encode, and what can it not? Can LLMs efficiently use all of their encoded knowledge while learning a downstream task? Are LLMs susceptible to the usual catastrophic forgetting while learning many tasks? How do we identify the biases that these LLMs encode, and how do we eliminate them? In this talk, I will present an overview of several research projects in my lab that attempt to answer these questions. The talk will bring to light some of the current limitations of LLMs and how to move forward to build more intelligent systems.
Location
Pavillon André-Aisenstadt
Campus de l'Université de Montréal
2920, chemin de la Tour
Montréal, Québec H3T 1J4
Canada