Affective Music Synthesis

The component renders the user's emotive state through real-time generation of affectively driven music. Key characteristics of the music are altered in response to the user's changing mood, which is psychologically characterised and expressed in the PAD (Pleasure-Arousal-Dominance) dimensional model. Processing is performed in real time so that the composed music matches the user's current emotive state. The focus is on music synthesis and affectivisation in a minimalist context: there is no direct user input; instead, an implicit emotive input derived from the PAD model drives the synthesis and affectivisation process.
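To make the idea concrete, the following is a minimal illustrative sketch of how a PAD triple could be mapped onto musical synthesis parameters. It is written in Java (which the component's stack already uses via pdj), but the class name, parameter names, value ranges, and mapping coefficients are all assumptions for illustration; they are not the component's actual mapping rules. The sketch assumes PAD values normalised to [-1, 1] and follows common affective-music heuristics (arousal driving tempo and loudness, pleasure driving mode and pitch register).

```java
// Hypothetical sketch only: this is NOT the AMS component's actual mapping.
// It assumes PAD values in [-1, 1] and derives a few musical parameters
// using common affective-music heuristics.
public class PadToMusicMapper {

    /** Simple container for the musical parameters being driven. */
    public static class MusicParams {
        public final double tempoBpm;   // tempo in beats per minute
        public final double loudness;   // amplitude scale in [0, 1]
        public final boolean majorMode; // major vs. minor tonality
        public final int register;      // pitch-register offset in semitones

        MusicParams(double tempoBpm, double loudness, boolean majorMode, int register) {
            this.tempoBpm = tempoBpm;
            this.loudness = loudness;
            this.majorMode = majorMode;
            this.register = register;
        }
    }

    /**
     * Maps a PAD triple to musical parameters (illustrative coefficients).
     * @param pleasure  valence in [-1, 1]
     * @param arousal   activation in [-1, 1]
     * @param dominance control in [-1, 1]
     */
    public static MusicParams map(double pleasure, double arousal, double dominance) {
        double tempo    = 90 + 40 * arousal;               // roughly 50..130 BPM
        double loudness = 0.5 + 0.3 * arousal
                        + 0.1 * dominance;                 // louder when aroused/dominant
        boolean major   = pleasure >= 0;                   // positive valence -> major mode
        int register    = (int) Math.round(12 * pleasure); // up to +-1 octave shift
        return new MusicParams(tempo, clamp(loudness), major, register);
    }

    private static double clamp(double v) {
        return Math.max(0.0, Math.min(1.0, v));
    }

    public static void main(String[] args) {
        MusicParams p = map(0.6, 0.8, 0.2); // e.g. a happy, excited state
        System.out.printf("tempo=%.0f BPM, loudness=%.2f, major=%b, register=%+d%n",
                p.tempoBpm, p.loudness, p.majorMode, p.register);
    }
}
```

In a real-time setting such a mapper would be re-evaluated whenever the PAD input changes, with the resulting parameters fed to the Pure Data synthesis patch; the smoothing and scheduling details are beyond this sketch.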


Availability of the component: Reference contact is Atta Badii of the Intelligent Media Systems and Services Research Laboratory, School of Systems Engineering, University of Reading.
Notes: The component will be made available under the GNU Lesser General Public Licence (Version 3, released 29 June 2007). It may be used, copied, modified, and distributed for any non-commercial purpose, provided existing copyright notices are retained. AMS makes use of Pure Data, a real-time graphical programming environment (BSD licence); Java 1.5.0_18 or a higher version; MXDUBLIN (LGPL licence); Xeq (BSD licence); Pdj (distributable with copyright and disclaimer notice); and RTC-lib, a library of compositional techniques for Pd and Max/MSP (distributable with copyright and disclaimer notice).


Besides functional tests, the Affective Music Synthesis component has been evaluated extensively within CALLAS through proof-of-concept applications for edutainment. Additional references to the component's usage appear in CALLAS Newsletter 2 and in the following CALLAS paper:
