In this paper, we investigate the use of emotional information in the learning process of autonomous agents. Inspired by four dimensions commonly postulated by appraisal theories of emotion, we construct a set of reward features to guide the learning process and behavior of a reinforcement learning (RL) agent inhabiting an environment of which it has only limited perception. Much as occurs in biological agents, each reward feature evaluates a particular aspect of the (history of) interaction of the agent with the environment, in a sense replicating some aspects of the appraisal processes observed in humans and other animals. Our experiments in several foraging scenarios show that, by optimizing the relative contributions of each reward feature, the resulting “emotional” RL agents attain superior performance to standard goal-oriented agents, particularly in the face of their inherent perceptual limitations. Our results support our claim that biological evolutionary adaptive mechanisms such as emotions can provide crucial clues for creating robust, general-purpose reward mechanisms for autonomous artificial agents, allowing them to overcome some of the challenges imposed by their inherent limitations.
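The core idea of combining appraisal-inspired reward features with tunable relative contributions can be illustrated with a minimal sketch. This is not the paper's implementation: the feature names, value ranges, and weights below are illustrative assumptions only, standing in for whatever appraisal dimensions and optimized weights the experiments actually use.

```python
# Minimal illustrative sketch (assumed, not the authors' code): the agent's
# reward as a weighted combination of appraisal-inspired features, each
# evaluating one aspect of the agent's interaction history.

def emotional_reward(features, weights):
    """Combine appraisal-based reward features into a single scalar reward.

    features: dict mapping feature name -> evaluated value
    weights:  dict mapping feature name -> relative contribution
    """
    return sum(weights[name] * value for name, value in features.items())

# Four hypothetical appraisal dimensions (names are assumptions):
features = {
    "novelty": 0.2,         # how unfamiliar is the current situation?
    "goal_relevance": 0.9,  # progress toward the foraging goal
    "control": 0.5,         # perceived ability to influence outcomes
    "valence": -0.1,        # intrinsic (un)pleasantness of the situation
}
# Relative contributions; in the paper these are optimized per scenario.
weights = {"novelty": 0.1, "goal_relevance": 0.6, "control": 0.2, "valence": 0.1}

r = emotional_reward(features, weights)
```

Optimizing the weight vector then amounts to a search over relative feature contributions, with a purely goal-oriented agent recovered as the special case where all weight is placed on the goal-related feature.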
Note: Accepted manuscript version.