Learning by Appraising: An Emotion-based Approach for Intrinsic Reward Design


Abstract In this paper, we investigate the use of emotional information in the learning process of autonomous agents. Inspired by four dimensions commonly postulated by appraisal theories of emotions, we construct a set of reward features to guide the learning process and behavior of a reinforcement learning (RL) agent inhabiting an environment of which it has only limited perception. Much like what occurs in biological agents, each reward feature evaluates a particular aspect of the (history of) interaction of the agent with the environment, in a sense replicating some aspects of appraisal processes observed in humans and other animals. Our experiments in several foraging scenarios show that, by optimizing the relative contributions of each reward feature, the resulting “emotional” RL agents attain superior performance to standard goal-oriented agents, particularly in the face of their inherent perceptual limitations. Our results support our claim that biological evolutionary adaptive mechanisms such as emotions can provide crucial clues for creating robust, general-purpose reward mechanisms for autonomous artificial agents, allowing them to overcome some of the challenges imposed by their inherent limitations.
Year 2014
Keywords Affective Computing; Reinforcement Learning
Authors Pedro Sequeira, Francisco S. Melo, Ana Paiva
Journal Adaptive Behavior
Volume 22
Number 5
Pages 330-349
Month October

@article{sequeira14,
  author   = {Pedro Sequeira and Francisco S. Melo and Ana Paiva},
  title    = {Learning by Appraising: An Emotion-based Approach for Intrinsic Reward Design},
  journal  = {Adaptive Behavior},
  year     = {2014},
  volume   = {22},
  number   = {5},
  pages    = {330--349},
  month    = {October},
  keywords = {Affective Computing; Reinforcement Learning},
  abstract = {In this paper, we investigate the use of emotional information in the learning process of autonomous agents. Inspired by four dimensions commonly postulated by appraisal theories of emotions, we construct a set of reward features to guide the learning process and behavior of a reinforcement learning (RL) agent inhabiting an environment of which it has only limited perception. Much like what occurs in biological agents, each reward feature evaluates a particular aspect of the (history of) interaction of the agent with the environment, in a sense replicating some aspects of appraisal processes observed in humans and other animals. Our experiments in several foraging scenarios show that, by optimizing the relative contributions of each reward feature, the resulting ``emotional'' RL agents attain superior performance to standard goal-oriented agents, particularly in the face of their inherent perceptual limitations. Our results support our claim that biological evolutionary adaptive mechanisms such as emotions can provide crucial clues for creating robust, general-purpose reward mechanisms for autonomous artificial agents, allowing them to overcome some of the challenges imposed by their inherent limitations.}
}
