Ad hoc multimodal semantic fusion components

The component combines affective results from other components into a dimensional PAD (Pleasure-Arousal-Dominance) model, offering an overall affective representation of user interactions.
The PAD-based fusion approach adopted in CALLAS integrates component output at the decision level, using an emotional model appropriate for affective interfaces, and it considers how emotions evolve over time in order to avoid implausible jumps between emotional states.
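The decision-level idea described above can be illustrated with a minimal sketch: each modality contributes a PAD estimate with a confidence weight, the estimates are fused by weighted averaging, and an exponential-smoothing step constrains how fast the fused state may move, avoiding implausible jumps. This is an illustrative assumption about one plausible realization, not the actual CALLAS implementation; all names (`PAD`, `fuse`, `smooth`, `alpha`) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class PAD:
    """A point in Pleasure-Arousal-Dominance space (each axis in [-1, 1])."""
    pleasure: float
    arousal: float
    dominance: float

def fuse(estimates, weights):
    """Decision-level fusion: confidence-weighted average of per-modality PAD estimates."""
    total = sum(weights)
    return PAD(
        sum(e.pleasure * w for e, w in zip(estimates, weights)) / total,
        sum(e.arousal * w for e, w in zip(estimates, weights)) / total,
        sum(e.dominance * w for e, w in zip(estimates, weights)) / total,
    )

def smooth(previous, fused, alpha=0.3):
    """Move only a fraction alpha toward the new fused state,
    so the affective state evolves gradually over time."""
    return PAD(
        previous.pleasure + alpha * (fused.pleasure - previous.pleasure),
        previous.arousal + alpha * (fused.arousal - previous.arousal),
        previous.dominance + alpha * (fused.dominance - previous.dominance),
    )
```

For example, fusing a speech estimate of high pleasure with a video estimate of high arousal at equal confidence yields an intermediate PAD point, which `smooth` then blends into the running state.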


Availability of the component: The reference contact is Marc Cavazza of Teesside University.
Notes: The component is at prototype level. It may be available upon request.


Ad hoc multimodal semantic fusion components have been applied to the fusion of Multi Keyword Spotting, real-time emotion recognition from speech, and video feature extraction components, all included in the e-Tree augmented reality art installation Scientific Showcase.

References to the component and its usage can be found in the following CALLAS papers and articles:

PAD-based Multimodal Affective Fusion
Using Affective Trajectories to Describe States of Flow in Interactive Art
An Affective Model of User Experience for Interactive Art

Last Updated on Monday, 17 May 2010 13:08