Video-Based Gesture Expressivity Features Extraction

This video-based component detects and tracks the user’s hands in order to extract and transmit the values of gesture expressivity features such as overall activation, spatial extent, temporality, fluidity and power. The processing operates on low-resolution video to identify gestures representative of spontaneous emotional behaviour, and it handles changes in environment, context and lighting conditions. Playback of pre-recorded material is supported, and the relevant features can be configured and adapted for offline processing of affectively enriched human behaviour.
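As a rough illustration of how such features can be derived once the hand has been tracked, the sketch below computes activation, spatial extent, power and fluidity from a sequence of hand centroid positions. The formulas are plausible interpretations of these expressivity parameters, not the component's actual implementation; the function name, the frame rate default and the exact definitions are assumptions.

```python
# Hypothetical sketch: computing gesture expressivity features from a
# 2-D hand trajectory (one (x, y) hand-centroid position per video frame).
# The exact formulas used by the CALLAS component are not published here;
# these are common-sense interpretations of the feature names.
from math import hypot

def expressivity_features(trajectory, fps=25.0):
    """trajectory: list of (x, y) hand positions, one per frame."""
    dt = 1.0 / fps
    # Frame-to-frame velocities and accelerations.
    vel = [((x2 - x1) / dt, (y2 - y1) / dt)
           for (x1, y1), (x2, y2) in zip(trajectory, trajectory[1:])]
    acc = [((vx2 - vx1) / dt, (vy2 - vy1) / dt)
           for (vx1, vy1), (vx2, vy2) in zip(vel, vel[1:])]
    speed = [hypot(vx, vy) for vx, vy in vel]
    # Overall activation: total amount of motion over the gesture.
    activation = sum(s * dt for s in speed)
    # Spatial extent: area of the bounding box swept by the hand.
    xs = [x for x, _ in trajectory]
    ys = [y for _, y in trajectory]
    extent = (max(xs) - min(xs)) * (max(ys) - min(ys))
    # Power: peak acceleration magnitude.
    power = max((hypot(ax, ay) for ax, ay in acc), default=0.0)
    # Fluidity: inverse of speed variance (smoother motion -> higher value).
    mean_s = sum(speed) / len(speed)
    var_s = sum((s - mean_s) ** 2 for s in speed) / len(speed)
    fluidity = 1.0 / (1.0 + var_s)
    return {"activation": activation, "spatial_extent": extent,
            "power": power, "fluidity": fluidity}

# A perfectly uniform horizontal sweep: maximal fluidity, zero power.
feats = expressivity_features([(i, 0.0) for i in range(10)])
```

In practice the trajectory would come from the hand detector and tracker working on the video frames; the same per-trajectory computation then applies regardless of how the positions were obtained.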


Availability of the component: Reference contact is Amaryllis Raouzaiou of the Institute of Communication and Computer Systems - National Technical University of Athens.
Notes: Interested parties can request the component, which will be made available for non-commercial use.


Besides functional tests, extensive experimentation with the Video-Based Gesture Expressivity Feature extraction component in CALLAS has been carried out in:
Last Updated on Monday, 17 May 2010 12:20