CALLAS


Emotional Attentive ECA


This ECA component is a real-time 3D female agent (called GRETA) which supports interaction with a user through a rich palette of verbal and non-verbal behaviours.

Communicative intentions of the speaker are rendered by talking and simultaneously showing facial expressions, gestures, gaze, and head movements. The emotional states of the agent can be defined using the PAD dimensional model of emotions, and the agent can also be animated with real data coming from motion capture and pre-recorded audio files.
This type of ECA can be adapted to fit the role of an educational tutor, a learning assistant, a conversational mate or a virtual companion, and could also be used to create virtual characters for video and serious games.
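
As a simple illustration (not taken from the GRETA distribution), an emotional state in the PAD dimensional model can be described by pleasure, arousal and dominance values in [-1, 1]. The Java sketch below uses assumed class names and example values to show such a representation, together with a distance measure that could map a measured state onto the nearest labelled emotion.

    // Illustrative sketch (not part of GRETA): an emotional state expressed
    // in the PAD dimensional model, with each dimension in [-1, 1].
    public class PadState {
        private final double pleasure;
        private final double arousal;
        private final double dominance;

        public PadState(double pleasure, double arousal, double dominance) {
            this.pleasure  = clamp(pleasure);
            this.arousal   = clamp(arousal);
            this.dominance = clamp(dominance);
        }

        private static double clamp(double v) {
            return Math.max(-1.0, Math.min(1.0, v));
        }

        // Euclidean distance to another PAD point, useful for mapping a
        // measured state onto the nearest labelled emotion.
        public double distanceTo(PadState other) {
            double dp = pleasure - other.pleasure;
            double da = arousal - other.arousal;
            double dd = dominance - other.dominance;
            return Math.sqrt(dp * dp + da * da + dd * dd);
        }

        public static void main(String[] args) {
            // Rough PAD placements; these values are assumptions for the example.
            PadState joy      = new PadState(0.8, 0.5, 0.4);
            PadState anger    = new PadState(-0.5, 0.6, 0.3);
            PadState measured = new PadState(0.6, 0.4, 0.2);
            String closer = measured.distanceTo(joy) < measured.distanceTo(anger) ? "joy" : "anger";
            System.out.println("Closest labelled emotion: " + closer);
        }
    }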


Availability of the component: the reference contacts are Catherine Pelachaud and Radoslaw Niewiadomski of Telecom Paristech.
Notes: The component is publicly available under the GPL. Greta Real-time 2009 works on Windows XP. The source code is available for Visual Studio 2005. The component (executables and source code) can be downloaded from here. The following additional free software is required: Psyclone software; Text to Speech Synthesiser TTS 3.6; DirectX9c; Microsoft Visual C++ 2005 SP1 Redistributable Package. The real-time version of Greta can be tested with a simple Java program, called interface, available here.
Complete documentation about GRETA is available, and a discussion forum is open for support and questions. Some examples can be downloaded.
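
The real-time agent is driven by messages exchanged through Psyclone and the Java interface program mentioned above. The sketch below only illustrates what a minimal driving client might look like, assuming a plain TCP transport and an FML-APML-like payload; the host, port and tag layout are assumptions for illustration, not the documented protocol.

    import java.io.OutputStreamWriter;
    import java.io.Writer;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;

    // Hypothetical client: builds an FML-APML-like document and writes it to a
    // socket. The real GRETA real-time setup exchanges messages via Psyclone and
    // the bundled Java "interface" program; host, port and payload shape here
    // are assumptions and are not validated against the FML-APML schema.
    public class GretaMessageSketch {
        public static void main(String[] args) throws Exception {
            String fml =
                "<fml-apml>\n" +
                "  <bml>\n" +
                "    <speech id=\"s1\" text=\"Hello, nice to meet you.\"/>\n" +
                "  </bml>\n" +
                "  <fml>\n" +
                "    <emotion id=\"e1\" type=\"joy\" start=\"s1:tm1\" end=\"s1:tm2\"/>\n" +
                "  </fml>\n" +
                "</fml-apml>\n";

            // Assumed endpoint; the actual address is configured by the Psyclone setup.
            try (Socket socket = new Socket("localhost", 10000);
                 Writer out = new OutputStreamWriter(socket.getOutputStream(), StandardCharsets.UTF_8)) {
                out.write(fml);
                out.flush();
            }
        }
    }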

Besides functional tests, the component has been extensively tested within CALLAS in Proof-of-Concept applications:
  • Interactive Storytelling: a mock-up scenario used the component for Affective Interactive Narrative, and the Emotional Interactive Storyteller System application used the ECA as an inductor system: it uses facial and body behaviours to share an emotional state with the user, invites the user to comment on images of the displayed scenes, and conveys the story of each scene in a turn-based interaction that continues to the end of the story. See also the article in CALLAS Newsletter2.
  • the AVLaughter Machine: in conjunction with the Smart Sensor Integration component and the Acoustic Awareness component, a recorded laughter is matched against a laughter corpus to find an appropriate answer, and the selected utterance is used to drive Greta, which then plays the audio synchronously with the facial motions of the selected laughter (a rough sketch of this matching step follows the paper reference below). See also the CALLAS training session at eNTERFACE 2009 and the related paper:
AVLaughterCycle: An audiovisual laughing machine Abstract
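
As a rough sketch of the matching idea described in the AVLaughter Machine item above, one could pick the corpus laughter whose acoustic features are closest to the recorded input. The feature set and distance measure below are assumptions for illustration; the actual pipeline is described in the AVLaughterCycle paper.

    import java.util.Map;

    // Illustrative sketch of the corpus-matching idea behind the AVLaughter
    // Machine: select the corpus laughter whose acoustic features are closest
    // to the recorded input. Features and distance are assumptions.
    public class LaughterMatcherSketch {

        static double distance(double[] a, double[] b) {
            double sum = 0.0;
            for (int i = 0; i < a.length; i++) {
                double d = a[i] - b[i];
                sum += d * d;
            }
            return Math.sqrt(sum);
        }

        static String findClosest(double[] input, Map<String, double[]> corpus) {
            String best = null;
            double bestDist = Double.POSITIVE_INFINITY;
            for (Map.Entry<String, double[]> entry : corpus.entrySet()) {
                double d = distance(input, entry.getValue());
                if (d < bestDist) {
                    bestDist = d;
                    best = entry.getKey();
                }
            }
            return best;
        }

        public static void main(String[] args) {
            // Toy feature vectors (e.g. duration, mean pitch, energy), invented for the example.
            Map<String, double[]> corpus = Map.of(
                "giggle_03", new double[]{1.2, 220.0, 0.4},
                "belly_laugh_07", new double[]{3.5, 180.0, 0.9});
            double[] recorded = {1.4, 210.0, 0.5};
            // The selected utterance would then drive Greta's audio and facial animation.
            System.out.println("Selected laughter: " + findClosest(recorded, corpus));
        }
    }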

Suggested reading includes the following articles and papers, plus a reference to the component's usage in CALLAS Newsletter4.
 
  • Model of Facial Expressions Management for an Embodied Conversational Agent: Abstract
  • GRETA: an interactive expressive ECA system: Abstract
  • Reactive behaviors in SAIBA architecture: Abstract
  • Evaluation of Multimodal Sequential Expressions of Emotions in ECA: Abstract
  • Towards a Real-time Gaze-based Shared Attention for a Virtual Agent: Abstract
  • GRETA: Towards an Interactive conversational virtual companion: Abstract
  • GRETA, une plateforme d'agent conversationnel expressif et interactif: Abstract
  • Modelling multimodal expression of emotion in a virtual agent: Abstract
  • Dynamic behavior qualifiers for conversational agents: Abstract
  • Searching for Prototypical Facial Feedback Signals: Abstract
  • Modeling emotional expressions as sequences of behaviors: Abstract
  • A listening agent exhibiting variable behaviour: Abstract
  • Expressions of empathy in ECAs: Abstract
  • Using Facial Expressions to Display Empathy in ECAs: Abstract
  • Introducing Multimodal sequential emotional expressions for virtual characters: Abstract
  • Modélisation des expressions faciales des émotions: Abstract

Watch the video of GRETA here.

Participation in the experimental evaluation is welcome: specific aspects are the smiling ECA and emotional expressions.