The challenge aims to bring together researchers from disciplines related to multimodal conversational analytics. It follows the D-META Challenge organized at ICMI 2012 in Santa Monica, which rested on two coupled pillars, method benchmarking and annotation evaluation, and was grounded in transparent, publicly available annotations, tasks, and evaluations on shared multimodal data sets. This second challenge continues that tradition.

The scope of the challenge is the multimodal analysis of primary cues and qualities of conversations. It aims to establish a basis for the comparison, analysis, and further improvement of multimodal data annotations and multimodal interactive systems, both of which are essential for building a wide range of multimodal applications. Machine learning-based challenges of this kind do not yet exist in the Multimodal Interaction community; by focusing on the development of algorithms and techniques on shared data sets, we aim to foster research on multimodal interactive systems.

[Download the call for papers here]