Smart Sensor Integration

The component supports the integration of one or more sensors into multimedia applications. It allows a developer to quickly turn standard sensors, such as a microphone, camera, or Wiimote, into "smart" sensors that present information in a form meeting the requirements of the application as effectively as possible. It provides a toolkit that wraps sensor devices, extracts in real time the information needed by an application, and fuses information from multiple modalities into a single outcome. A graphical interface is provided to assist users in collecting their own training corpora and in personalising their use.
Several libraries for affective input recognition have been incorporated into the toolkit, such as real-time emotion recognition from speech (EmoVoice).
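To illustrate the wrap/extract/fuse idea described above, the sketch below shows one possible structure for such a pipeline in C++. The interfaces and class names (ISensor, IExtractor, IFusion, FakeMic, and so on) are hypothetical and chosen only for illustration; they are not the SSI API.

#include <iostream>
#include <memory>
#include <string>
#include <utility>
#include <vector>

// Hypothetical interfaces for illustration only -- NOT the actual SSI classes.
struct Frame    { std::vector<float> samples; };   // raw data read from one sensor
struct Features { std::vector<float> values;  };   // feature vector extracted per modality

struct ISensor    { virtual Frame read() = 0; virtual ~ISensor() = default; };
struct IExtractor { virtual Features extract(const Frame&) = 0; virtual ~IExtractor() = default; };
struct IFusion    { virtual std::string fuse(const std::vector<Features>&) = 0; virtual ~IFusion() = default; };

// Dummy stand-ins so the sketch runs; a real setup would wrap a microphone, camera, Wiimote, ...
struct FakeMic : ISensor {
    Frame read() override { return Frame{{0.1f, 0.4f, 0.2f}}; }
};
struct EnergyExtractor : IExtractor {
    Features extract(const Frame& f) override {
        float e = 0; for (float s : f.samples) e += s * s;
        return Features{{e}};
    }
};
struct MaxFusion : IFusion {                       // trivial "fusion": pick the strongest channel
    std::string fuse(const std::vector<Features>& all) override {
        size_t best = 0;
        for (size_t i = 1; i < all.size(); ++i)
            if (all[i].values[0] > all[best].values[0]) best = i;
        return "channel " + std::to_string(best);
    }
};

int main() {
    // Wrap each device and pair it with a real-time feature extractor.
    std::vector<std::pair<std::unique_ptr<ISensor>, std::unique_ptr<IExtractor>>> channels;
    channels.emplace_back(std::make_unique<FakeMic>(), std::make_unique<EnergyExtractor>());

    // Per frame: read every sensor, extract features, fuse to a single application-level outcome.
    MaxFusion fusion;
    std::vector<Features> all;
    for (auto& c : channels) all.push_back(c.second->extract(c.first->read()));
    std::cout << "fused outcome: " << fusion.fuse(all) << "\n";
}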


Availability of the component: Reference contacts are Johannes Wagner and Elisabeth André of the University of Augsburg.
Notes: The component is developed in C++ and built as a static library. The core system includes standard sensor devices, basic signal processing and a machine learning pipeline. The source code is freely available under the GNU Lesser General Public License (LGPL). The SSI/ModelUI (a graphical interface that assists a user in collecting their own training corpora and obtaining personalised models, which can be used to train EmoVoice) is available to registered users prior to downloading the code. The component makes use of the following third-party software:
- oscpack -- Open Sound Control packet manipulation library
- KISS FFT -- a mixed-radix Fast Fourier Transform, Copyright (c) 2003-6 Mark Borgerding
- WiiYourself! -- native C++ Wiimote library v1.14 BETA, (c) gl.tter 2007-8
- AVI utilities -- for creating AVI files, (c) 2002 Lucian Wischik, no restrictions on use
- Torch 3.1 -- a machine-learning library, Copyright (c) 2003-4 Ronan Collobert
- TinyXML -- a simple, small C++ XML parser, Copyright (c) 2000-2006 Lee Thomason
Documentation of the SSI framework includes a comprehensive tutorial with examples (updated 8 December 2009).
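The oscpack dependency indicates how results can be delivered over the network to a multimedia application. As a minimal sketch using the oscpack API (the OSC address "/ssi/emotion", the port, and the payload are illustrative assumptions, not part of SSI), a recognition result could be sent as follows:

#include "osc/OscOutboundPacketStream.h"
#include "ip/UdpSocket.h"

int main()
{
    const char* host = "127.0.0.1";   // assumed address of the receiving multimedia application
    const int   port = 7000;          // assumed listening port
    char buffer[1024];

    UdpTransmitSocket socket(IpEndpointName(host, port));
    osc::OutboundPacketStream p(buffer, sizeof(buffer));

    // One OSC message carrying a recognised class label and a confidence value.
    p << osc::BeginMessage("/ssi/emotion")
      << "anger" << 0.87f
      << osc::EndMessage;

    socket.Send(p.Data(), p.Size());
    return 0;
}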



A dedicated CALLAS Gesture Expressivity Experiment (CGEE) was performed in Greece, Germany and Italy on the use of SSI for synchronised recordings of gesture expressivity, emotional speech and facial expressions.

The component has been used for corpus acquisition and audio analysis in the ElektroEmotion Research Tool, in Scientific Showcases (the CommonTouch), and in Proof-of-Concept applications (EUCLIDE and the AV laughter machine).
Additional features of the component are described in the following papers, articles and poster:
