Collection 

Multimodal learning and applications

Submission status
Closed
Digital content is now available from multiple, heterogeneous sources across a wide range of sensing modalities. Learning from multimodal sources offers the unprecedented possibility of capturing correspondences between modalities and gaining in-depth insights in a variety of domains. However, the management, integration, and interpretation of such multimodal data, which comprises both intra-modality and cross-modality information, pose multiple challenges to traditional data fusion and learning methods. This Collection aims to showcase current progress and the latest solutions in multimodal learning, and encourages practical, interdisciplinary research toward systems that integrate multiple modalities for real-world applications.

Editors

  • Hairong Qi

    University of Tennessee - Knoxville, USA

  • Gemine Vivone

    National Research Council - Institute of Methodologies for Environmental Analysis, CNR-IMAA, Tito Scalo, Italy

  • Stefanos Vrochidis

    Information Technologies Institute, Centre for Research and Technology Hellas, Greece

  • Shuihua Wang

    University of Leicester, UK

  • Wenwu Wang

    University of Surrey, UK