Video Feature Extraction

The component extracts faces from a video sequence or a live camera feed to derive information about emotional state, content and context. It keeps track of the number of people looking towards the camera and derives cues about the state of the audience's interest. See poster.
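As a minimal illustration of such a cue, the sketch below defines audience interest as the fraction of detected faces oriented toward the camera, averaged over the event. The face and orientation counts are assumed inputs (in the real component they would come from the detection and head-orientation stages); the class and function names are hypothetical, not part of the component's API.

```python
def attention_ratio(n_faces, n_facing):
    """Fraction of detected faces oriented toward the camera; 0 if no faces."""
    return n_facing / n_faces if n_faces else 0.0

class InterestTracker:
    """Running average of per-frame attention over the whole event (hypothetical)."""
    def __init__(self):
        self._total = 0.0
        self._frames = 0

    def update(self, n_faces, n_facing):
        self._total += attention_ratio(n_faces, n_facing)
        self._frames += 1

    @property
    def level(self):
        return self._total / self._frames if self._frames else 0.0

# Feed per-frame counts (faces detected, faces looking at the camera)
tracker = InterestTracker()
for faces, facing in [(4, 2), (4, 4), (0, 0), (2, 1)]:
    tracker.update(faces, facing)
print(round(tracker.level, 3))  # → 0.5
```

Averaging per-frame ratios (rather than pooling raw counts) keeps a crowded frame from dominating the estimate; either choice is defensible depending on how the cue is used.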
Video analysis in typical applications such as surveillance, multimedia content analysis and medical imaging usually concentrates on recognition of gestures, head tracking, pose/gaze estimation and facial expression from video frames. The Video Feature Extraction CALLAS component focuses instead on indicators of the level of interest or enthusiasm of an audience participating in an event or installation: it uses face detection for counting and tracking people, head orientation for head-movement information, and it also includes quantitative movement analysis. Processing is performed in real time by balancing the computational load across the time-consuming image processing tasks.
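The source does not describe how the load balancing is implemented; one common way to keep per-frame cost bounded is to interleave the heavy analyses, running only one of them on each incoming frame. The sketch below assumes that scheme, with lightweight stand-in functions in place of the real detection, head-pose and motion stages.

```python
from collections import deque

class FrameTaskScheduler:
    """Round-robin scheduler: run one heavy analysis task per frame,
    so expensive stages share the frame budget instead of stacking up."""
    def __init__(self, tasks):
        self._tasks = deque(tasks)  # (name, callable) pairs
        self.results = {}           # latest result per task name

    def process_frame(self, frame):
        name, task = self._tasks[0]
        self._tasks.rotate(-1)      # move the task to the back of the queue
        self.results[name] = task(frame)
        return name

# Hypothetical stand-ins for the real image-processing stages
def detect_faces(frame):       return []
def estimate_head_pose(frame): return None
def measure_motion(frame):     return 0.0

sched = FrameTaskScheduler([
    ("faces", detect_faces),
    ("head_pose", estimate_head_pose),
    ("motion", measure_motion),
])
order = [sched.process_frame(f) for f in range(6)]
print(order)  # → ['faces', 'head_pose', 'motion', 'faces', 'head_pose', 'motion']
```

The trade-off is latency: each cue is refreshed only every N frames, which is acceptable for slowly varying signals such as audience interest.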


Availability of the component: Reference contacts are Markus Niiranen and Tommi Keränen of VTT Technical Research Centre of Finland.
Notes: The component will be made available in its compiled (32-bit Windows executable) form for non-commercial use upon request.


Besides functional tests, extensive experimentation in CALLAS is carried out in Scientific Showcases. The component is also used in the Emotional Character application, a proof of concept for edutainment that analyzes camera input in real time, for the whole length of the event, from the point of view of an affective puppet character, tracing all the faces framed.

Last Updated on Monday, 28 June 2010 12:26