Perception module

The Perception module supports multiple sensors, such as the Microsoft Kinect V1 and V2, the Q Sensor, the OKAO SDK for facial expressions, and multiple video recorders, in order to log and interpret the sensory information in a synchronised manner at a sampling rate set from the interface. Additionally, it captures HD (1080p) video directly from the Kinect V2 sensor, which is also synchronised with the log files. It can be used to capture human-robot interactions and general activities in close proximity to the sensors. The module is based on the Thalamus architecture and on a client-server model for external modules. When used with Thalamus, the capture can be initiated with a start message, which also supports proper file naming and tagging. Otherwise, the user can start the recording by pressing the Start button.
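For illustration, a minimal C# sketch of the two start paths follows; the Recorder and CaptureController names are hypothetical placeholders and do not reflect the actual Thalamus or Perception module API:

    using System;

    // Illustrative sketch of the two ways a capture can start: a Thalamus
    // start message or the Start button. All names here are placeholders,
    // not the actual Thalamus or Perception module API.
    public class Recorder
    {
        public void Start(string sessionTag)
        {
            Console.WriteLine("Recording started, files tagged as: " + sessionTag);
        }
    }

    public class CaptureController
    {
        private readonly Recorder recorder = new Recorder();

        // Invoked when a Thalamus start message arrives; the message supplies
        // the tag used for file naming.
        public void OnThalamusStart(string sessionTag)
        {
            recorder.Start(sessionTag);
        }

        // Invoked when the user presses the Start button in the interface.
        public void OnStartButtonClick(object sender, EventArgs e)
        {
            recorder.Start("manual-" + DateTime.Now.ToString("yyyyMMdd-HHmmss"));
        }
    }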

Figure 1: Perception module interface

The Perception module interface provides controls that allow the user to select the active scenario (1 or 2 users) and the enabled sensors, and to monitor the sensor readings in real time.

Perception Architecture

The Perception module retrieves data from multiple sensors and analyses them in real time in order to detect the user's actions. The new Kinect and Q Sensor modules have been embedded inside the Perception module to reduce the number of external messages and the chance of dropping data. The Perception module logs all incoming data in a synchronised manner (synchronised with a Thalamus message) and stores them at a sampling rate set from the interface.
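The logging loop can be pictured with the sketch below, assuming hypothetical names (SampledLogger, OnSensorData): sensor callbacks overwrite the most recent sample, and a single logging thread writes it out at the configured rate so all logs share one clock:

    using System;
    using System.IO;
    using System.Threading;

    // Minimal sketch of rate-limited, synchronised logging. All names are
    // illustrative; the real module logs several sensors in parallel.
    public class SampledLogger
    {
        private volatile string latestSample = "";
        private readonly StreamWriter writer;
        private readonly int intervalMs;

        public SampledLogger(string path, double samplingRateHz)
        {
            writer = new StreamWriter(path);
            intervalMs = (int)(1000.0 / samplingRateHz);
        }

        // Called from a sensor thread whenever new data arrives.
        public void OnSensorData(string csvFields)
        {
            latestSample = csvFields;
        }

        // Runs on the logging thread until cancelled.
        public void Run(CancellationToken token)
        {
            while (!token.IsCancellationRequested)
            {
                writer.WriteLine(DateTime.Now.ToString("HH:mm:ss.fff") + " , " + latestSample);
                Thread.Sleep(intervalMs);
            }
            writer.Flush();
        }
    }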

Figure 2: Perception module architecture

This module has been developed in C# for Windows. It has been used in multiple studies in the EMOTE project to log sensory data for later offline analysis, capturing electrodermal activity from the Q Sensor, HD video from the Kinect sensor, facial action units, body position, head position, user gaze and user expressions. All of this information was later analysed by psychologists.

Logging structure details:

Kinect:

Time, FaceRotation X (degrees), FaceRotation Y (degrees), FaceRotation Z (degrees), FaceProperty.Happy (True, False, Maybe, Unknown), FaceProperty.MouthMoved (True, False, Maybe, Unknown), Head Position.X, Head Position.Y, Head Position.Z, Lean.X (degrees), Lean.Y (degrees), LeanTrackingState (True, False, Maybe, Unknown), JawOpen, JawSlideRight, LeftcheekPuff, LefteyebrowLowerer, LefteyeClosed, LipCornerDepressorLeft, LipCornerDepressorRight, LipCornerPullerLeft, LipCornerPullerRight, LipPucker, LipStretcherLeft, LipStretcherRight, LowerlipDepressorLeft, LowerlipDepressorRight, RightcheekPuff, RighteyebrowLowerer, RighteyeClosed

QSensor:

Time, AccelZ, AccelX, AccelY, SkinTemperature, EDA

OKAO:

Time, userID (1-2), ConfidenceVal (0-1000), SmileVal (0-100), Neutral (0-100), Happiness (0-100), Surprise (0-100), Fear (0-100), Anger (0-100), Disgust (0-100), Sadness (0-100), FaceUpDown (degrees), FaceLeftRight (degrees), GazeDirection (labelled)
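All three logs share the same comma-separated layout, so a simple split-based reader is enough for offline analysis. The sketch below parses a Q Sensor line; the QSensorSample class is hypothetical, but the fields follow the structure listed above:

    using System;
    using System.Globalization;

    // Parses one Q Sensor log line of the form:
    //   Time , AccelZ , AccelX , AccelY , SkinTemperature , EDA
    public class QSensorSample
    {
        public string Time;
        public double AccelZ, AccelX, AccelY, SkinTemperature, Eda;

        public static QSensorSample Parse(string line)
        {
            string[] f = line.Split(',');
            return new QSensorSample
            {
                Time = f[0].Trim(),
                AccelZ = double.Parse(f[1], CultureInfo.InvariantCulture),
                AccelX = double.Parse(f[2], CultureInfo.InvariantCulture),
                AccelY = double.Parse(f[3], CultureInfo.InvariantCulture),
                SkinTemperature = double.Parse(f[4], CultureInfo.InvariantCulture),
                Eda = double.Parse(f[5], CultureInfo.InvariantCulture)
            };
        }
    }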

Download instructions

Before running the Perception module, you need to install the Kinect SDK if you want to use the sensor. The SDK is available for free from the Microsoft website at: https://www.microsoft.com/en-us/download/details.aspx?id=44561

Download the latest DivX codec from www.divx.com (it's free).

Perception accepts two command-line arguments: the first is the Thalamus scenario and the second is the scenario number as an integer (1 or 2; the default is 1). For example, "perception emote 2" loads Perception on the emote Thalamus scenario using scenario 2 (two users), as sketched below.
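The argument handling is roughly equivalent to the following sketch (the code is illustrative, not the actual Perception source):

    using System;

    // Sketch of the arguments described above: the first is the Thalamus
    // scenario, the second the scenario number (1 or 2), defaulting to 1.
    static class Program
    {
        static void Main(string[] args)
        {
            string thalamusScenario = args.Length > 0 ? args[0] : "";
            int scenario = 1;
            int parsed;
            if (args.Length > 1 && int.TryParse(args[1], out parsed) && (parsed == 1 || parsed == 2))
            {
                scenario = parsed;
            }
            Console.WriteLine("Thalamus scenario: " + thalamusScenario + ", users: " + scenario);
        }
    }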

Executable:

http://gaips.inesc-id.pt/emote/downloads/perception-module-executable/

Download the zip file and extract it to a folder, then run Perception.exe.

Source code:

http://gaips.inesc-id.pt/emote/downloads/perception-module-source-code/ 

Download the zip file and extract it to a folder.

Extract libs.rar into your Bin folder to get the OpenCV files and the missing audio file.

Open the solution file with Visual Studio.