
Multi Keyword Spotting

The MultiKeyword Spotting component recognizes when one of a pre-defined set of utterances occurs in speech. This is useful for selecting different paths in an application or for estimating the user's feelings, indirectly driving application changes. The component is speaker-independent and can run in automatic or push-to-talk mode. Both the list of words to be recognized and the language can be changed at runtime.
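To make the component's interface concrete, the following is a minimal sketch of what such a keyword spotter might look like. All names and the confidence value are illustrative assumptions, not the actual CALLAS API; the acoustic recognition step is stood in for by scanning a recognized transcript.

```python
# Hypothetical sketch of a multi-keyword spotter interface.
# Names, signatures, and confidence values are illustrative only.
from dataclasses import dataclass

@dataclass
class Detection:
    keyword: str
    confidence: float

class KeywordSpotter:
    """Speaker-independent spotter; vocabulary and language can change at runtime."""

    def __init__(self, keywords, language="en"):
        self.keywords = set(keywords)
        self.language = language

    def set_vocabulary(self, keywords, language=None):
        # The word list (and optionally the language) can be swapped at runtime.
        self.keywords = set(keywords)
        if language is not None:
            self.language = language

    def spot(self, transcript):
        # Stand-in for acoustic keyword spotting: scan a recognized
        # transcript and report each vocabulary hit with a dummy confidence.
        hits = []
        for word in transcript.lower().split():
            if word in self.keywords:
                hits.append(Detection(keyword=word, confidence=0.9))
        return hits

spotter = KeywordSpotter(["start", "restart", "stop"])
detections = spotter.spot("please restart the tree")
```

In this sketch, `set_vocabulary` models the runtime-changeable word list and language mentioned above, and each detection carries a confidence rate that a client application can threshold or weight.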

Availability of the component: the reference contact is Jérôme Urbain of Université de Mons.
Notes: ACAPELA S.A. is the reference licensor of the component and also supports commercial deployments.

Besides functional tests, the component undergoes extensive experimentation in the CALLAS Scientific Showcases:
  • the e-Tree: in this Augmented Reality art installation, the component recognizes pre-defined sequences of words and reports confidence rates.
  • the CommonTouch (collective empathic navigation of slogans on a public multi-touch screen): the component spots the slogan vocabulary spoken out loud, as well as affective vocabulary in the users' speech.
The component is also used in Proof-of-Concept applications for edutainment and in EUCLIDE:
  • the Musickiosk and the Interactive Opera use the component for character selection, activation, and restarting; emotional keywords are weighted and combined with other input modalities into P-A-D values for emotion estimation.
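The weighting step mentioned above can be sketched as a confidence-weighted average over a keyword-to-P-A-D lookup table. The table entries and weights below are made-up illustrative values, not CALLAS data or the project's actual fusion method.

```python
# Illustrative sketch: combining weighted emotional keywords into a single
# Pleasure-Arousal-Dominance (P-A-D) estimate. All values are made up.
PAD = {
    "happy": (0.8, 0.5, 0.4),
    "angry": (-0.6, 0.7, 0.3),
    "calm":  (0.5, -0.6, 0.2),
}

def fuse_pad(detections):
    """detections: list of (keyword, confidence) pairs.

    Returns the confidence-weighted mean of the keywords' P-A-D vectors,
    or the neutral point (0, 0, 0) when nothing was detected.
    """
    total = sum(conf for _, conf in detections)
    if total == 0:
        return (0.0, 0.0, 0.0)
    return tuple(
        sum(PAD[kw][axis] * conf for kw, conf in detections) / total
        for axis in range(3)
    )

estimate = fuse_pad([("happy", 0.9), ("calm", 0.3)])
```

In a multimodal setting, an estimate like this one could then be fused with P-A-D values derived from other input modalities, as the Proof-of-Concept applications do.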
References to the component and its usage in CALLAS can be found in the following papers:

  • Using Affective Trajectories to Describe States of Flow in Interactive Art
  • PAD-based Multimodal Affective Fusion
  • E-Tree: Emotionally Driven Augmented Reality Art
  • An Emotionally Responsive AR Art Installation
  • Developing Affective Intelligence for an Interactive Installation: Insights from a Design Process
Figure: Input and output of the CALLAS MultiKeyword Spotting component
Last Updated on Thursday, 01 July 2010 18:44