Graph representation learning: Recent advances and open challenges
William Hamilton – McGill University, Canada
Webinar link
Meeting ID: 910 7928 6959
Passcode: VISS
Graph-structured data is ubiquitous throughout the natural and social sciences, from telecommunication networks to quantum chemistry. Building relational inductive biases into deep learning architectures is crucial if we want systems that can learn, reason, and generalize from this kind of data. Recent years have seen a surge in research on graph representation learning, most prominently in the development of graph neural networks (GNNs). Advances in GNNs have led to state-of-the-art results in numerous domains, including chemical synthesis, 3D vision, recommender systems, question answering, and social network analysis. In the first part of this talk, I will provide an overview of recent progress in this fast-growing area, highlighting foundational methods and theoretical motivations. In the second part, I will discuss fundamental limitations of the current GNN paradigm. Finally, I will conclude by discussing recent progress my group has made in advancing graph representation learning beyond the GNN paradigm.
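For readers unfamiliar with the GNN paradigm the abstract refers to, the sketch below illustrates the core message-passing idea shared by most GNN architectures: each node updates its representation by combining its own features with an aggregate of its neighbours' features. This is a minimal, generic illustration, not material from the talk; the function names, the mean aggregator, and the toy graph are all assumptions made for the example.

```python
import numpy as np

def gnn_layer(node_feats, adjacency, weight_self, weight_neigh):
    """One illustrative message-passing layer with mean aggregation.

    node_feats:   (num_nodes, d_in) node feature matrix
    adjacency:    (num_nodes, num_nodes) binary adjacency matrix
    weight_self:  (d_in, d_out) weights applied to each node's own features
    weight_neigh: (d_in, d_out) weights applied to aggregated neighbour features
    """
    # Mean of each node's neighbour features (guard against isolated nodes).
    degrees = adjacency.sum(axis=1, keepdims=True)
    neigh_mean = adjacency @ node_feats / np.maximum(degrees, 1)
    # Combine self and neighbour information, then apply a ReLU nonlinearity.
    return np.maximum(0, node_feats @ weight_self + neigh_mean @ weight_neigh)

# Tiny example: a 4-node path graph with 3-dimensional node features.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 3))
H = gnn_layer(X, A, rng.normal(size=(3, 8)), rng.normal(size=(3, 8)))
print(H.shape)  # (4, 8): one 8-dimensional embedding per node
```

Stacking several such layers lets information propagate over multi-hop neighbourhoods, which is the basic mechanism whose strengths and limitations the talk examines.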
Bio: William (Will) Hamilton is an Assistant Professor in the School of Computer Science at McGill University, a Canada CIFAR AI Chair, and a member of the Mila AI Institute of Quebec. Will completed his PhD in Computer Science at Stanford University in 2018. He received the 2018 Arthur Samuel Thesis Award for the best Computer Science PhD thesis at Stanford University, the 2014 CAIAC MSc Thesis Award for the best AI-themed MSc thesis in Canada, and an honorable mention for the 2013 ACM Undergraduate Researcher of the Year. His interests lie at the intersection of machine learning, network science, and natural language processing, with a current emphasis on the fast-growing subject of graph representation learning.
Location
Montréal, Québec
Canada