Informal Systems Seminar (ISS)

Social learning under behavioral assumptions


January 15, 2021, 2:00 p.m. – 3:00 p.m.

Rabih Salhab, Institute for Data, Systems, and Society, MIT, United States


Webinar link
Webinar number: 910 7928 6959
Passcode: VISS

I will present two recent works on social learning under behavioral assumptions. The first is Social Learning with Sparse Belief Samples. In this work, we introduce a non-Bayesian model of learning over a social network in which a group of agents with insufficient and heterogeneous sources of information share their experiences to learn an underlying state of the world. Inspired by a recent body of research in cognitive science on human decision making, we make two behavioral assumptions. The first, motivated by the coarseness of communication, posits that agents only share samples taken from their belief distribution over the set of states, which we refer to as their actions; this contrasts with sharing the full belief, i.e., the probability distribution over the entire set of states. The second is limited cognitive power: individuals incorporate their neighbors' actions into their beliefs following a simple DeGroot-like social learning rule that suffers from redundancy neglect and imperfect recall of past history. We show that so long as all individuals trust their neighbors' actions more than their private signals, they may end up mislearning the state with positive probability. Learning, on the other hand, requires that the population include a group of self-confident experts in different states: for each state, there is an agent whose signaling function for her state of expertise is distinguishable from the convex hull of the remaining signaling functions, and whose private signals weigh sufficiently in her social learning rule. This is joint work with Amir Ajorlou and Ali Jadbabaie.
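To make the type of learning rule concrete, the sketch below simulates a DeGroot-like update in which each agent broadcasts a single state sampled from its current belief (its action) and mixes its neighbors' sampled actions with a Bayesian update on its own private signal. The complete-graph network, binary signal model, and trust weights (w_self, w_neighbor) are illustrative assumptions for this sketch, not the paper's exact specification.

```python
import numpy as np

# Illustrative sketch (assumed parameters, not the authors' exact model):
# agents hold beliefs over a finite set of states, broadcast one sampled state
# per round, and mix neighbors' samples with a private Bayesian update.
rng = np.random.default_rng(0)

n_agents, n_states = 5, 3
true_state = 1

# Assumed signaling functions: P(signal = 1 | state) for each agent.
likelihoods = rng.uniform(0.2, 0.8, size=(n_agents, n_states))

# Assumed trust weights on a complete graph.
w_self = 0.3
w_neighbor = (1.0 - w_self) / (n_agents - 1)

beliefs = np.full((n_agents, n_states), 1.0 / n_states)

for t in range(200):
    # Each agent's "action": one state sampled from its current belief.
    actions = np.array([rng.choice(n_states, p=b) for b in beliefs])

    new_beliefs = np.empty_like(beliefs)
    for i in range(n_agents):
        # Bayesian update of agent i's belief on its own private binary signal.
        signal = rng.random() < likelihoods[i, true_state]
        lik = likelihoods[i] if signal else 1.0 - likelihoods[i]
        own = beliefs[i] * lik
        own /= own.sum()

        # DeGroot-like mixing: own posterior plus point masses at the states
        # sampled by neighbors (no correction for redundancy, no memory).
        social = np.zeros(n_states)
        for j in range(n_agents):
            if j != i:
                social[actions[j]] += w_neighbor
        new_beliefs[i] = w_self * own + social
    beliefs = new_beliefs

print("final beliefs:\n", beliefs.round(3))
```

Raising w_self relative to w_neighbor in this sketch corresponds to agents weighing their private signals more heavily, the regime the abstract associates with learning.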

The second work is Social Learning with Unreliable Agents and Self-reinforcing Stochastic Dynamics. We consider a group of agents with fixed, unobservable binary "beliefs". An individual's belief models, for example, their political support (Democrat or Republican). At each time period, agents broadcast binary opinions over a social network. We assume that individuals may lie and declare opinions different from their true beliefs in order to conform with their neighbors. This raises the natural question of whether one can estimate the agents' true beliefs from observations of the declared opinions. We analyze this question in the special case of a complete graph. We show that, as long as the population does not include large majorities, estimation of the aggregate true belief and of individual true beliefs is possible. On the other hand, large majorities force minorities to lie as time goes to infinity, which makes asymptotic estimation impossible. This is joint work with Anuran Makur, Ali Jadbabaie, and Elchanan Mossel.
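As a rough illustration of the self-reinforcing effect, the sketch below lets agents with fixed binary beliefs re-declare opinions on a complete graph, conforming to the declared majority with a probability that grows with the majority's margin. The conformity rule and the constant alpha are assumptions made for this sketch only; they are not the paper's dynamics, but they show how a large majority can drive declared opinions away from the true minority share.

```python
import numpy as np

# Illustrative sketch (assumed dynamics, not the paper's exact model):
# agents with fixed binary beliefs declare opinions; pressure to conform
# grows with the declared majority's margin, reinforcing that majority.
rng = np.random.default_rng(1)

n_agents = 200
alpha = 0.9                                               # assumed max conformity pressure
true_beliefs = (rng.random(n_agents) < 0.7).astype(int)   # assumed 70% hold belief "1"

declared = true_beliefs.copy()   # initial declarations are truthful

for t in range(5000):
    i = rng.integers(n_agents)          # one agent re-declares per step
    frac_ones = declared.mean()         # declared opinions on the complete graph
    majority = int(frac_ones >= 0.5)
    margin = abs(2 * frac_ones - 1)     # how lopsided the declared opinions are
    # The stronger the declared majority, the more likely agent i conforms
    # (lying if needed); otherwise it declares its true belief.
    if rng.random() < alpha * margin:
        declared[i] = majority
    else:
        declared[i] = true_beliefs[i]

print("true fraction of ones:    ", true_beliefs.mean())
print("declared fraction of ones:", declared.mean())
```

With these assumed parameters the declared fraction drifts well above the true 70% share, which mirrors the qualitative point that large majorities obscure the minority's true beliefs.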


Biography: I am currently a Postdoctoral Associate at the MIT Institute for Data, Systems, and Society (IDSS), hosted by Prof. Ali Jadbabaie. From 2018 to 2019, I was an IVADO Postdoctoral Fellow at HEC Montreal, hosted by Prof. Georges Zaccour. I completed my Ph.D. in Electrical Engineering at Ecole Polytechnique de Montreal, Canada, under the supervision of Prof. Roland Malhamé and Prof. Jerome Le Ny, in April 2018. I received the B.S. degree in Electrical Engineering from Ecole Superieure d'Ingenieurs de Beyrouth (E.S.I.B), Lebanon, in 2008. From 2008 to 2013, I was an Electrical Engineer with Dar al Handasah Shair and Partners, Lebanon.

Peter E. Caines, organizer
Aditya Mahajan, organizer
Shuang Gao, organizer
Rinel Foguen Tchuendom, organizer
Yaroslav Salii, organizer

Location

Webinar
Zoom
Montréal, Québec
Canada
