Multimodality, Augmented Reality and Affective Computing form a broad research area involving many disciplines. Among many research initiatives, the projects recalled below are representative of investigations in these areas and are particularly close to CALLAS:
- AMI: studying computer-enhanced multi-modal interaction in the context of meetings.
- focusing on engaging users with new media applications through context awareness and mobile AR.
- creating environments in which computers serve humans who focus on interacting with other humans, as opposed to having to attend to and being preoccupied with the machines themselves.
- aiming to change the way we think about the relationship of people to computers and the Internet by developing a virtual conversational 'Companion': an agent or 'presence' that stays with the user for long periods of time, developing a relationship and 'knowing' its owner's preferences and wishes.
- making a co-ordinated effort to establish a shared understanding of the issues involved in multimodality at large.
- proposing a multi-faceted theory of artificial long-term companions (including memory, emotions, cognition, communication, learning, etc.) and experimenting with the theory and the technology in real social environments.
- developing an open-source platform for the rapid development of multimodal interactive systems, as a central tool for an iterative, user-centered design process.
- integrating a development platform, based on an existing games engine, with tools and mechanisms for the interoperable binding, inclusion and access of existing, emerging and new multi-modal I/O devices in the context of serious games.
- SAFIRA: providing affective speech modules that support interactions in real-time applications.
- introducing the concept of multimodal interaction of users with search engines (including non-verbal, implicit, or emotion-based cues).