Publication

Multimodal multimodel emotion analysis as linked data

Sánchez-Rada, J. Fernando
Iglesias, Carlos A.
Sagha, Hesam
Schuller, Björn
Wood, Ian
Buitelaar, Paul
Citation
J. F. Sánchez-Rada, C. A. Iglesias, H. Sagha, B. Schuller, I. Wood and P. Buitelaar, "Multimodal multimodel emotion analysis as linked data," 2017 Seventh International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW), San Antonio, TX, 2017, pp. 111-116. doi: 10.1109/ACIIW.2017.8272599
Abstract
The lack of a standard emotion representation model hinders emotion analysis due to the incompatibility of annotation formats and models from different sources, tools, and annotation services. This is also a limiting factor for multimodal analysis, since recognition services from different modalities (audio, video, text) tend to have different representation models (e.g., continuous vs. discrete emotions). This work presents a multi-disciplinary effort to alleviate this problem by formalizing conversion between emotion models. The specific contributions are: i) a semantic representation of emotion conversion; ii) an API proposal for services that perform automatic conversion; iii) a reference implementation of such a service; and iv) validation of the proposal through use cases that integrate different emotion models and service providers.
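To make the kind of conversion the abstract describes concrete, the sketch below maps a continuous valence-arousal annotation (as an audio recognizer might produce) to a discrete category (as a text annotator might use). This is only an illustration under stated assumptions: the nearest-centroid strategy, the category set, and the centroid coordinates are all hypothetical choices for this example, not the paper's formalization or its API.

    import math

    # Illustrative centroids for a few discrete categories in
    # valence-arousal space. The values are assumptions made for
    # this sketch, not taken from the paper or any standard model.
    CATEGORY_CENTROIDS = {
        "joy":     (0.8, 0.5),
        "anger":   (-0.6, 0.7),
        "sadness": (-0.7, -0.4),
        "calm":    (0.4, -0.6),
    }

    def dimensional_to_categorical(valence, arousal):
        """Map a continuous (valence, arousal) annotation to the
        nearest discrete category by Euclidean distance."""
        def distance(centroid):
            cv, ca = centroid
            return math.hypot(valence - cv, arousal - ca)
        return min(CATEGORY_CENTROIDS,
                   key=lambda c: distance(CATEGORY_CENTROIDS[c]))

    # Example: a continuous annotation from an audio recognizer,
    # converted so it can be fused with categorical text annotations.
    print(dimensional_to_categorical(0.7, 0.6))  # -> "joy"

A conversion service of the kind the paper proposes would expose such mappings behind a common API and describe them semantically, so that clients can combine annotations from heterogeneous providers.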
Publisher
IEEE
Publisher DOI
10.1109/ACIIW.2017.8272599
Rights
Attribution-NonCommercial-NoDerivs 3.0 Ireland