Multi Keyword Spotting

The Multi Keyword Spotting component recognizes when one of a pre-defined set of utterances occurs in speech. This can be used to select different paths in an application or to evaluate the user's feelings, indirectly driving application changes. The component is speaker-independent and can run in automatic or push-to-talk mode. Both the list of words to be recognized and the language can be changed at runtime.
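To illustrate the interaction pattern described above, the sketch below simulates a keyword spotter whose vocabulary can be swapped at runtime. It is a minimal, hypothetical example operating on text transcripts rather than live audio; the class and method names are illustrative and are not the real CALLAS API.

```python
# Hypothetical sketch of a keyword-spotting interface (not the CALLAS API).
# Real spotting runs on audio; here matching is simulated on transcripts.

class KeywordSpotter:
    def __init__(self, keywords, language="en"):
        self.language = language
        self.set_keywords(keywords)

    def set_keywords(self, keywords):
        # The list of words to recognize can be changed while the app runs.
        self.keywords = [k.lower() for k in keywords]

    def spot(self, utterance):
        # Return which of the configured keywords occur in the utterance
        # (case-insensitive, whole-token match).
        tokens = utterance.lower().split()
        return [k for k in self.keywords if k in tokens]


spotter = KeywordSpotter(["yes", "no", "help"])
print(spotter.spot("Yes please, I need help"))  # ['yes', 'help']

# Swap the vocabulary at runtime, e.g. after a language change.
spotter.set_keywords(["oui", "non"])
print(spotter.spot("oui merci"))  # ['oui']
```

An application would typically map each spotted keyword to a branch in its dialogue or interaction logic, which is what "indirectly driving application changes" amounts to in practice.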


Availability of the component: the reference contact is Jerome Urbain of Université de Mons.
Notes: ACAPELA S.A. is the reference licensor of the component and also supports commercial deployments.


Besides functional tests, extensive experimentation with the component is carried out in the CALLAS Scientific Showcases. The component is also used in Proof-of-Concept applications for edutainment and in EUCLIDE.
References to the component and its usage in CALLAS can be found in the following papers:

Using Affective Trajectories to Describe States of Flow in Interactive Art
PAD-based Multimodal Affective Fusion
E-Tree: Emotionally Driven Augmented Reality Art
An Emotionally Responsive AR Art Installation
Developing Affective Intelligence for an Interactive Installation: Insights from a Design Process
Figure: Input and output of the CALLAS MultiKeyword Spotting component
Last Updated on Thursday, 01 July 2010 18:44