Multimodal multimodel emotion analysis as linked data
Sánchez-Rada, J. Fernando ; Iglesias, Carlos A. ; Sagha, Hesam ; Schuller, Björn ; Wood, Ian D. ; Buitelaar, Paul
Publication Date
2017-10-23
Type
Conference Paper
Citation
Sánchez-Rada, J. Fernando, Iglesias, Carlos A., Sagha, Hesam, Schuller, Björn, Wood, Ian D., & Buitelaar, Paul. (2017). Multimodal multimodel emotion analysis as linked data. Paper presented at the 2017 Seventh International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW), San Antonio, TX, USA, 23-26 October, pp. 111-116, doi: 10.1109/ACIIW.2017.8272599
Abstract
The lack of a standard emotion representation model hinders emotion analysis due to the incompatibility of annotation formats and models from different sources, tools and annotation services. This is also a limiting factor for multimodal analysis, since recognition services from different modalities (audio, video, text) tend to have different representation models (e.g., continuous vs. discrete emotions). This work presents a multi-disciplinary effort to alleviate this problem by formalizing conversion between emotion models. The specific contributions are: i) a semantic representation of emotion conversion; ii) an API proposal for services that perform automatic conversion; iii) a reference implementation of such a service; and iv) validation of the proposal through use cases that integrate different emotion models and service providers.
Publisher
IEEE
Publisher DOI
10.1109/ACIIW.2017.8272599
Rights
Attribution-NonCommercial-NoDerivs 3.0 Ireland