The paper “Emotional Machines: Toward Affective Virtual Environments” presents a model that identifies emotions in speech and maps them to immersive virtual environments. The paper was presented at ACM Multimedia, one of the world’s leading conferences in multimedia, which took place from October 10 to 14 in Lisbon.
This work, developed by Jorge Forero and Gilberto Bernardes, researchers from INESC TEC’s Centre for Telecommunications and Multimedia (CTM), stemmed from the need to better understand the emotional nature of speech and how it relates to the perception of the environment. The study proposes an emotion recognition model that considers both the semantic and acoustic components of speech, and defines a strategy for mapping the predicted emotions onto virtual spaces.
“This model allows disambiguating the emotional component of speech by combining two machine learning models: one considers the semantic component, to capture what is said, while the other takes into account the acoustic elements, to understand how things are said”, explained Jorge Forero.
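The idea of combining the two channels can be illustrated with a minimal late-fusion sketch. This is not the authors’ implementation: the emotion labels, the probability inputs, and the averaging scheme below are all assumptions chosen for illustration; the paper’s actual models and fusion strategy may differ.

```python
# Illustrative sketch (hypothetical, not the paper's method): fuse a
# semantic model ("what is said") with an acoustic model ("how it is
# said") by weighted averaging of their per-emotion probabilities.

EMOTIONS = ["happiness", "anger", "sadness", "neutral"]  # assumed label set


def fuse_predictions(semantic_probs, acoustic_probs, weight=0.5):
    """Weighted late fusion of two per-emotion probability dicts.

    `weight` is the share given to the semantic model; the rest goes
    to the acoustic model. The result is renormalised to sum to 1.
    """
    fused = {
        e: weight * semantic_probs.get(e, 0.0)
           + (1.0 - weight) * acoustic_probs.get(e, 0.0)
        for e in EMOTIONS
    }
    total = sum(fused.values()) or 1.0
    return {e: p / total for e, p in fused.items()}


def predict_emotion(semantic_probs, acoustic_probs, weight=0.5):
    """Return the emotion with the highest fused probability."""
    fused = fuse_predictions(semantic_probs, acoustic_probs, weight)
    return max(fused, key=fused.get)


# Example: the text alone is ambiguous (e.g. "fine" could signal
# happiness or anger), but the acoustic channel disambiguates it.
semantic = {"happiness": 0.4, "anger": 0.4, "sadness": 0.1, "neutral": 0.1}
acoustic = {"happiness": 0.1, "anger": 0.7, "sadness": 0.1, "neutral": 0.1}
print(predict_emotion(semantic, acoustic))  # -> anger
```

In practice the two probability vectors would come from trained classifiers (e.g. a text model over transcripts and an audio model over acoustic features); here they are hard-coded only to show how fusion resolves an ambiguous case.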
The results of the research could be useful for virtual assistants, which would benefit from better predicting the emotions contained in speech in order to improve interactions, or for telemarketers, who could remotely control their virtual reality glasses through voice commands.
ACM Multimedia is rated A* in the Australian CORE ranking, which reviews major conferences in the areas of computing, and is a key event for presenting scientific studies and innovative industrial products in the area of multimedia.
The researchers mentioned in this news piece are associated with INESC TEC, FCT and UP-FEUP.