Autonomous Agents and Multiagent Systems, AAMAS 2016 Workshops Best Papers


Abstract Fairness plays a decisive role in human decisions and clearly shapes social preferences. This is evident when groups of individuals need to divide a given resource. Nevertheless, computational models seeking to capture the origins and effects of human fairness often assume the simpler case of two-person interactions. Here we study a multiplayer extension of the well-known Ultimatum Game. This game allows us to study fair behaviors in a group setting: a proposal is made to a group of Responders and the overall acceptance depends on reaching a minimum number of individual acceptances. To capture the effects of different group environments on the human propensity to be fair, we model a population of learning agents interacting through the multiplayer ultimatum game. We show that, contrary to what would happen with fully rational agents, learning agents coordinate their behavior into different strategies, depending on factors such as the minimum number of accepting Responders (to achieve group acceptance) or the group size. Overall, our simulations show that stringent group criteria promote fairer proposals. We find these conclusions robust to (i) asynchronous and synchronous strategy updates, (ii) initially biased agents, (iii) different group payoff division paradigms, and (iv) a wide range of error and forgetting rates.
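The game mechanics described in the abstract are compact enough to sketch in code. The snippet below is a minimal illustration of one round of a Multiplayer Ultimatum Game, not the authors' implementation: the names offer, thresholds and min_accepts are ours, and the equal split of the offered share among Responders is just one of the payoff-division paradigms the abstract alludes to.

def mug_round(offer, thresholds, min_accepts):
    """Play one round of a Multiplayer Ultimatum Game (MUG).

    offer       -- fraction of a unit resource offered by the Proposer
    thresholds  -- personal acceptance thresholds of the N Responders
    min_accepts -- minimum number of individual acceptances (M) needed
                   for the group to accept the proposal
    Returns (proposer_payoff, list_of_responder_payoffs).
    """
    n = len(thresholds)
    # Each Responder accepts iff the offer meets its personal threshold.
    accepts = sum(1 for q in thresholds if offer >= q)
    if accepts >= min_accepts:
        # Equal-division paradigm: the offered share is split evenly
        # among all Responders, accepting or not.
        return 1.0 - offer, [offer / n] * n
    # Group rejection: the resource is lost and everyone earns zero.
    return 0.0, [0.0] * n

# Example: offer of 0.4 to three Responders; the group accepts if at
# least M = 2 accept. Thresholds 0.2 and 0.3 accept, 0.5 rejects,
# so 2 >= 2 and the proposal passes.
print(mug_round(0.4, [0.2, 0.5, 0.3], min_accepts=2))
# -> (0.6, [0.1333..., 0.1333..., 0.1333...])

Note how min_accepts drives the contrast the abstract draws: with fully rational Responders any positive offer is accepted, so Proposers can offer next to nothing regardless of M, whereas the paper's learning agents settle on offers that depend on M and on the group size.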
Year 2016
Keywords Game Theory; Multi-Agent Societies; Miscellaneous; Reinforcement Learning
Authors Fernando P. Santos, Francisco C. Santos, Francisco S. Melo, Ana Paiva, Jorge M. Pacheco
Publisher Springer International Publishing
Chapter Dynamics of Fairness in Groups of Autonomous Learning Agents
Volume 10002
Series Lecture Notes in Computer Science
Pages 107-126
BibTeX

@inbook{santos16,
  author    = {Fernando P. Santos and Francisco C. Santos and Francisco S. Melo and Ana Paiva and Jorge M. Pacheco},
  chapter   = {Dynamics of Fairness in Groups of Autonomous Learning Agents},
  title     = {Autonomous Agents and Multiagent Systems, AAMAS 2016 Workshops Best Papers},
  series    = {Lecture Notes in Computer Science},
  volume    = {10002},
  pages     = {107-126},
  publisher = {Springer International Publishing},
  year      = {2016},
  keywords  = {Game Theory; Multi-Agent Societies; Miscellaneous; Reinforcement Learning},
  abstract  = {Fairness plays a decisive role in human decisions and clearly shapes social preferences. This is evident when groups of individuals need to divide a given resource. Nevertheless, computational models seeking to capture the origins and effects of human fairness often assume the simpler case of two-person interactions. Here we study a multiplayer extension of the well-known Ultimatum Game. This game allows us to study fair behaviors in a group setting: a proposal is made to a group of Responders and the overall acceptance depends on reaching a minimum number of individual acceptances. To capture the effects of different group environments on the human propensity to be fair, we model a population of learning agents interacting through the multiplayer ultimatum game. We show that, contrary to what would happen with fully rational agents, learning agents coordinate their behavior into different strategies, depending on factors such as the minimum number of accepting Responders (to achieve group acceptance) or the group size. Overall, our simulations show that stringent group criteria promote fairer proposals. We find these conclusions robust to (i) asynchronous and synchronous strategy updates, (ii) initially biased agents, (iii) different group payoff division paradigms, and (iv) a wide range of error and forgetting rates.}
}
