Artificial Retrieval of Information Assistants – Virtual Agents with Linguistic Understanding, Social skills, and Personalised Aspects
The ARIA-VALUSPA (Artificial Retrieval of Information Assistants – Virtual Agents with Linguistic Understanding, Social skills, and Personalised Aspects) project will create a ground-breaking new framework for the easy creation of Artificial Retrieval of Information Assistants (ARIAs) capable of holding multi-modal social interactions in challenging and unexpected situations. The system can generate search queries and return the requested information by interacting with humans through virtual characters. These virtual humans will be able to sustain an interaction with a user for some time, and to react appropriately to the user’s verbal and non-verbal behaviour when presenting the requested information and refining search results. Using audio and video signals as input, both the verbal and the non-verbal components of human communication are captured. Together with a rich and realistic emotive personality model, a sophisticated dialogue management system decides how to respond to a user’s input, be it a spoken sentence, a head nod, or a smile. The ARIA uses dedicated speech synthesisers to create emotionally coloured speech, and a fully expressive 3D face to render the chosen response. Backchannelling to indicate that the ARIA has understood what the user meant, or returning a smile, are but a few of the many ways in which it can employ emotionally coloured social signals to improve communication.
Mining and Understanding of multilinguaL contenT for Intelligent Sentiment Enriched coNtext and Social Oriented inteRpretation
MULTISENSOR aims to advance the research and development of multilingual media analysis technologies. The goal is to enable users (e.g. journalists or entrepreneurs) to attain a comprehensive and accurate understanding of the topics they are engaged in, not only from their own viewpoint but from multiple viewpoints. MULTISENSOR will help gather and semantically integrate the various local, subjective and biased views disseminated via TV, radio, mass-media websites and social media. Using sentiment, social and spatiotemporal methods, MULTISENSOR will then help to interpret, relate and summarise economic information and news items.
Roadmap for Conversational Interaction Technologies
ROCKIT is a strategic roadmapping project for research and innovation in the area of natural conversational interaction. Its primary scientific focus concerns interactive agents that are proactive, multimodal, social, and autonomous. A second focus concerns systems that can extract and exploit rich context and knowledge from heterogeneous data sources. The ROCKIT project supports the Conversational Interaction Technology Innovation Alliance (CITIA), which aims to accelerate research and innovation in the area of natural conversational interaction.
A federation of European projects and organisations working on technologies for a multilingual Europe
Cracking the Language Barrier assembles all European research and innovation projects, together with all related community organisations, that work on or with cross-lingual or multilingual technologies, in neighbouring areas, or on closely related topics. Through this umbrella initiative we collaborate on our joint objective of overcoming all kinds of language and communication barriers with the help of sophisticated language technologies.