post

Scenario 2: Sustainability learning with EnerCities

In the EMOTE project, we have developed a collaborative learning activity in which students interact with a robot and learn about sustainable development. The activity chosen for this learning environment is a serious game called EnerCities, in which three players (including the empathic robotic tutor) must collaborate to build a sustainable city together. While playing EnerCities with the robotic tutor, students learn concepts related to pollution, energy shortages, and renewable energy. The ultimate goal is to raise awareness of sustainability problems and possible solutions through interaction with the empathic robotic tutor in the EnerCities game.

The original online single-player version of EnerCities was adapted in the EMOTE project into a multiplayer version in which children play alongside the empathic robotic tutor. They interact and play through a multi-touch table, which provides an engaging and fun environment for learning. Transforming EnerCities into a three-player game was intended to stimulate collaborative learning about sustainable development. In this game, the empathic robotic tutor acts as a peer and fellow player, fostering children's understanding of sustainability.

The behaviour of the robot was iteratively designed together with teachers and children. A set of user studies with these stakeholders enabled us to develop a robot that uses pedagogical strategies to foster interaction while keeping children engaged with the learning content. At the same time, the robot behaves empathically, leading to a personalised way of teaching and, hence, of learning.

This learning activity was developed to sustain long-term educational interactions with our empathic robotic tutor. Over the time children spend with the robot, they are invited to build a city together that must be sustainable in order to grow. During this process, the robot invites children to discuss their actions in the game. For example, the robotic tutor prompts students to decide what the city needs in terms of the environmental, economic, and wellbeing needs of its citizens. As the sessions progress, students become aware that sustainability is a complex construct that involves balancing different resources and indicators to develop the city. As time goes by, students and the robot try new strategies to solve sustainability problems, such as creating and implementing new policies for the city.

 

Video

The following video provides an overview of the EnerCities dynamics:

 

Evaluating EnerCities with learners

EnerCities with the autonomous empathic robotic tutor was tested and evaluated in real schools in Portugal and Sweden. We performed both long-term and short-term evaluations that led to interesting findings regarding robots in education. Overall, the studies showed that children are engaged in the learning process during the educational sessions with the robotic tutor. Moreover, their notions about sustainability change over time, leading to different choices in the actions they perform while playing the serious game: they become more aware of collaborative choices for the sustainable city and start to incorporate decisions that strive for balanced indicators.

This suggests that the robot "does its job": it engages students in the learning content while managing group dynamics and adapting its pedagogical strategies and content to their performance.

 

Future directions

The experience of bringing the collaborative EMOTE scenario into schools led to two insights: robots can play an important role in education as interactive technologies for learning, and schools are receptive to integrating new ways of engaging students in learning activities. We therefore foresee robots being included in classrooms in the near future as important pedagogical tools.

In that regard, the developments made within the EMOTE project provide the research community with a rich and complex system architecture that can be reused in further work on educational robotics. We hope to build upon these developments to target new applications for robotics in education.

 

 

EMOTE Empathic Robot Tutor Technical Description

Hardware

Hardware required for the EMOTE Empathic Robot Tutor:

  • Touch table or touch device (an 18-inch touch tablet was used for demonstration)
  • NAO robot torso from Aldebaran (height 30 cm, width 30 cm)
  • LAN cables and an Ethernet router
  • Microsoft Kinect v2 sensor
  • Laptop with at least an Intel i5 CPU
  • Stereo microphone
  • Web camera with at least 640×480 resolution
  • Affectiva Q Sensor for electro-dermal activity
  • Bluetooth receiver

Sensors

The array of sensors is composed of:

  • Microsoft Kinect v2:
    • Head position (X, Y, Z)
    • Head direction (X, Y)
    • Facial action units
  • Webcam + Omron's OKAO image-processing suite:
    • Face position (X, Y)
    • Head direction angles
    • Eye-gaze angles
    • Smile estimation and detection
    • Facial expressions: anger, disgust, fear, joy, sadness, surprise, neutral
  • Stereo microphone:
    • Direction of detected noises (left or right)
  • MultiTaction touch screen:
    • Screen coordinates of the user's last touch on the screen
  • Affectiva Q Sensor:
    • Electro-dermal activity
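Conceptually, each time step yields one fused snapshot combining these streams. The sketch below shows what such a percept frame could look like in Python; the field names and structure are illustrative assumptions, not the project's actual message schema:

```python
from dataclasses import dataclass, field
from typing import Dict, Optional, Tuple


@dataclass
class PerceptFrame:
    """One fused snapshot of the sensor array (illustrative only)."""
    head_position: Optional[Tuple[float, float, float]] = None   # Kinect v2: X, Y, Z
    head_direction: Optional[Tuple[float, float]] = None         # Kinect v2 / OKAO angles
    gaze_angles: Optional[Tuple[float, float]] = None            # OKAO eye-gaze estimate
    expressions: Dict[str, float] = field(default_factory=dict)  # OKAO: emotion -> confidence
    smile: Optional[float] = None                                # OKAO smile estimation
    noise_direction: Optional[str] = None                        # stereo mic: "left" / "right"
    last_touch: Optional[Tuple[int, int]] = None                 # touch-screen coordinates
    eda: Optional[float] = None                                  # Q Sensor electro-dermal level


# Example frame: OKAO detected a joyful face while a noise came from the left.
frame = PerceptFrame(expressions={"joy": 0.8, "neutral": 0.2}, noise_direction="left")
```

Keeping all per-user readings in one timestamped structure like this makes it straightforward for downstream modules to consume a consistent view of the sensory state.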

 

Making sense of sensory data

The sensors’ output is handled by the Perception and Emotional Climate modules, which allow the system to generate the following high-level user-awareness information and an estimate of the users’ affective state:

  • Users’ gaze direction: user gazing to robot, to left part of screen, to right part of screen, elsewhere;
  • User pointing to screen;
  • User touching chin;
  • Emotional Climate.

The Perception module interprets the data coming from the sensors with which the tutor platform is equipped: the OKAO vision suite, which contributes information about facial expressions; the Q Sensor, which outputs electro-dermal data; and the Kinect v2 sensor, used mostly for skeleton data.

The Emotional Climate module takes as input the Perception module’s OKAO-related messages, i.e., both students’ facial expressions and gaze orientation, in order to evaluate the emotions of the group. The purpose is for the robotic tutor to interact with groups of students in an emotionally intelligent manner.
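A group-level signal like this can be pictured as averaging a per-student valence derived from expression confidences and then discretising the result. The following is a simplified sketch under that assumption; the valence weights and thresholds are illustrative, not the project's actual computation:

```python
# Illustrative valence weights for OKAO-style expression labels (assumed values).
VALENCE = {"joy": 1.0, "surprise": 0.3, "neutral": 0.0,
           "sadness": -0.7, "fear": -0.8, "disgust": -0.8, "anger": -0.9}


def student_valence(expressions):
    """Confidence-weighted valence for one student's expression estimates."""
    total = sum(expressions.values())
    if total == 0:
        return 0.0
    return sum(VALENCE.get(label, 0.0) * conf
               for label, conf in expressions.items()) / total


def emotional_climate(group):
    """Average the valence over all detected students and discretise it."""
    if not group:
        return "neutral"
    mean = sum(student_valence(s) for s in group) / len(group)
    if mean > 0.2:
        return "positive"
    if mean < -0.2:
        return "negative"
    return "neutral"


# Two students, both leaning towards joy -> a positive group climate.
climate = emotional_climate([{"joy": 0.9, "neutral": 0.1},
                             {"joy": 0.6, "surprise": 0.4}])
```

A discrete climate label such as this is what the tutor's decision-making can condition on when choosing an empathic response for the group rather than for a single student.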

 

Control

The Empathic Robotic Tutor autonomously selects an appropriate pedagogical strategy during the game according to the users’ actions and dynamically responds to changes in the users’ affective state. Its architecture is detailed in the following figure:

EnerCities is the game interface that provides the tools for users to interact with the system. The Learner Model stores the history of users’ actions and assesses each student’s task actions and performance. The GameAI is a collaborative artificial intelligence (AI) module for the turn-based, multiplayer version of EnerCities; it informs the tutor about important aspects of the game state as well as the human players’ actions, supporting both the game playing and the pedagogical decision-making of the robotic tutor. The Interaction Manager decides how the interactive behaviour with the users is established, using the outputs of the other modules to produce behaviours based on pedagogical strategies. Finally, Skene translates the resulting high-level intentions into atomic behaviour actions (e.g., speech, gazing, gestures) to be performed by the robot.
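The final translation step can be pictured as mapping one high-level intention onto an ordered list of atomic robot actions. The sketch below is hypothetical: the intention names, the action vocabulary, and the utterances are assumptions for illustration, not Skene's actual behaviour markup:

```python
# Hypothetical intention-to-action translation in the style of the
# Skene module: one pedagogical intention expands into an ordered
# sequence of (action_type, payload) pairs for the robot to perform.
def translate(intention, target="group"):
    if intention == "praise_move":
        return [("gaze", target),
                ("gesture", "nod"),
                ("speech", "Great choice, that keeps our city green!")]
    if intention == "prompt_discussion":
        return [("gaze", target),
                ("speech", "What does our city need most right now?"),
                ("gesture", "open_arms")]
    # Fallback behaviour for intentions this sketch does not cover.
    return [("speech", "Let's keep playing!")]


actions = translate("praise_move")
```

Separating the decision ("praise this move") from its realisation (gaze, gesture, speech) is what lets the Interaction Manager reason at a pedagogical level while the robot-specific details stay in one place.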

The connection and communication between all the aforementioned modules, as well as their modular development, was made possible by the Thalamus framework.
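The general idea behind such a framework is that modules exchange messages without referencing each other directly. The toy publish-subscribe bus below illustrates that decoupling only; it is not the actual Thalamus API:

```python
from collections import defaultdict


class ToyBus:
    """Minimal publish-subscribe bus illustrating how loosely coupled
    modules (Perception, Learner Model, Interaction Manager, Skene)
    can exchange messages without direct dependencies on each other."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        """Register a callback to be invoked for every message on a topic."""
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        """Deliver a message to every subscriber of the topic."""
        for callback in self._subscribers[topic]:
            callback(message)


bus = ToyBus()
received = []
# E.g. the Interaction Manager subscribing to gaze percepts...
bus.subscribe("perception/gaze", received.append)
# ...and the Perception module publishing one.
bus.publish("perception/gaze", {"user": 1, "target": "robot"})
```

Because producers and consumers only share topic names and message formats, individual modules can be developed, replaced, or tested in isolation, which is the modular-development property the paragraph above describes.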

 

Related publications

  • Sequeira, P., Alves-Oliveira, P., Ribeiro, T., Di Tullio, E., Petisca, S., Melo, F. S., Castellano, G., & Paiva, A. (2016). Discovering Social Interaction Strategies for Robots from Restricted-Perception Wizard-of-Oz Studies. In Proceedings of the 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI).
  • Ribeiro, T., Pereira, A., Di Tullio, E., & Paiva, A. (2016). The SERA ecosystem: Socially Expressive Robotics Architecture for autonomous human-robot interaction. In the AAAI 2016 Spring Symposium on Enabling Computing Research in Socially Intelligent Human-Robot Interaction. Association for the Advancement of Artificial Intelligence, Palo Alto.
  • Sequeira, P., Melo, F. S., & Paiva, A. (2015, August). “Let’s save resources!”: A dynamic, collaborative AI for a multiplayer environmental awareness game. In Computational Intelligence and Games (CIG), 2015 IEEE Conference on (pp. 399-406). IEEE.
  • Jones, A., Küster, D., Basedow, C. A., Alves-Oliveira, P., Serholt, S., Hastie, H., Lee, C., Barendregt, W., Kappas, A., Paiva, A., & Castellano, G. (2015). Empathic Robotic Tutors for Personalised Learning: A Multidisciplinary Approach. In Social Robotics (pp. 285-295). Springer International Publishing.
  • Alves-Oliveira, P., & Paiva, A. (2015). Challenges in Child-Robot Interaction: The cases of two research projects. ICSR 2015 Workshop on child-robot interaction evaluation. Paris, France.
  • Srinivasan Janarthanam, Helen Hastie, Amol Deshmukh, Ruth Aylett and Mary Ellen Foster (2015). A Reusable Interaction Management Module: Use case for Empathic Robotic Tutoring. In Proceedings of SemDial, 2015
  • Sofia Serholt, Wolmet Barendregt, Iolanda Leite, Helen Hastie, Aidan Jones, Ana Paiva, Asimina Vasalou, Ginevra Castellano (2014). Teachers’ Views on the Use of Empathic Robotic Tutors in the Classroom. In Proceedings of Ro-man’14
  • Tiago Ribeiro, Patrícia Alves-Oliveira, Eugenio Di Tullio, Sofia Petisca, Pedro Sequeira, Amol Deshmukh, Srinivasan Janarthanam, Mary Ellen Foster, Aidan Jones, Lee J Corrigan, Fotios Papadopoulos, Helen Hastie, Ruth Aylett, Ginevra Castellano and Ana Paiva. The Empathic Robotic Tutor: Featuring the NAO Robot, Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction Extended Abstracts Pages 285-285
  • Patrícia Alves-Oliveira, Sofia Petisca, Srinivasan Janarthanam, Helen Hastie, Ana Paiva (2014). How do you imagine robots? Children’s expectations about robots, In Proceedings of Child-Robot Interaction Workshop, Interaction Design and Children Conference 2014, Denmark
  • Patricia Oliveira, Srinivasan Janarthanam, Ana Margarida Candeias, Amol Deshmukh, Tiago Ribeiro, Helen Hastie, Ana Paiva, Ruth Aylett. (2014). Towards Dialogue Dimensions for a Robotic Tutor in Collaborative Learning Scenarios. In Proceedings of Ro-man’14
  • Aylett, R., Kappas, A., Castellano, G., Bull, S., Barendregt, W., Paiva, A., & Hall, L. (2015). I Know How That Feels – An Empathic Robot Tutor. eChallenges e-2015 Conference Proceedings (in print).
  • Alves-Oliveira, P., Sequeira, P., Di Tullio, E., Petisca, S., Guerra, C., Melo, F. S., & Paiva, A. (2015). “It’s amazing, we are all feeling it!” Emotional climate as a group-level emotional expression in HRI. In 2015 AAAI Fall Symposium Series. Washington, D.C.
  • Alves-Oliveira, P., Ribeiro, T., Petisca, S., Di Tullio, E., Melo, F. S., & Paiva, A. (2015). An Empathic Robotic Tutor for School Classrooms: Considering Expectation and Satisfaction of Children as End-Users. In Social Robotics (pp. 21-30). Springer International Publishing.
  • Ribeiro, T., Di Tullio, E., Corrigan, L. J., Jones, A., Papadopoulos, F., Aylett, R., Castellano, G. and Paiva, A. (2014). Developing Interactive Embodied Characters Using the Thalamus Framework: A Collaborative Approach. In Intelligent Virtual Agents (pp. 364-373). Springer International Publishing.
  • Aylett, R., Barendregt, W., Castellano, G., Kappas, A., Menezes, N., & Paiva, A. (2014). An Embodied Empathic Tutor. In 2014 AAAI Fall Symposium Series.
  • Ribeiro, T., Pereira, A., Deshmukh, A., Aylett, R., & Paiva, A. (2014). I’m the Mayor – a Robot Tutor in Enercities-2. In Proceedings of the 2014 international conference on Autonomous agents and multi-agent systems (pp. 1675-1676). International Foundation for Autonomous Agents and Multiagent Systems.
  • Ribeiro, T., Pereira, A., Di Tullio, E., Alves-Oliveira, P., & Paiva, A. (2014). From Thalamus to Skene – High-level behaviour planning and managing for mixed reality characters. In Proceedings of the IVA 2014 Workshop on Architectures and Standards for IVAs.
  • Ribeiro, T. & Paiva, A. (2014). Make Way for the Robot Animators! Bringing Professional Animators and AI Programmers Together in the Quest for the Illusion of Life in Robotic Characters. In 2014 AAAI Fall Symposium Series.
  • Alves-Oliveira, P., Di Tullio, E., Ribeiro, T., & Paiva, A. (2014). Meet Me Half Way – Eye Behaviour as an Expression of Robots Language. In 2014 AAAI Fall Symposium Series.
  • Ribeiro, T., Paiva, A., & Dooley, D. (2013). Nutty Tracks: Symbolic Animation Pipeline for Expressive Robotics. In ACM SIGGRAPH 2013 Posters (p. 8). ACM.
  • Castellano, G., Paiva, A., Kappas, A., Aylett, R., Hastie, H., Barendregt, W., Nabais, F. and Bull, S. (2013). Towards Empathic Virtual and Robotic Tutors. In Artificial Intelligence in Education (pp. 733-736). Springer Berlin Heidelberg.
  • Deshmukh, A., Castellano, G., Kappas, A., Barendregt, W., Nabais, F., Paiva, A., Ribeiro, T., Leite, I. and Aylett, R. (2013). Towards Empathic Artificial Tutors. In Proceedings of the 8th ACM/IEEE international conference on Human-robot interaction (pp. 113-114). IEEE Press.
  • Ginevra Castellano, Ana Paiva, Arvid Kappas, Ruth Aylett, Helen Hastie, Wolmet Barendregt, Fernando Nabais and Susan Bull (2013) Towards Empathic Virtual and Robotic Tutors. In Proceedings of the 16th Conference on Artificial Intelligence in Education (AIED), Memphis, Tennessee, USA