Collection 

Multimodal learning and applications

Submission status
Closed
Submission deadline
Digital content is nowadays available from multiple, heterogeneous sources across a wide range of sensing modalities. Learning from multimodal sources offers the unprecedented possibility of capturing correspondences between modalities and gaining in-depth insights into a variety of domains. However, the management, integration, and interpretation of this multimodal data, which comprises both intra-modality and cross-modality information, pose multiple challenges to traditional data fusion and learning methods. This Collection aims to showcase current progress and the latest solutions in multimodal learning, and it encourages practical and interdisciplinary research towards systems that can integrate multiple modalities for real-world solutions.

Editors

  • Shuihua Wang, PhD

    University of Leicester, UK

  • Wenwu Wang, PhD

    University of Surrey, UK

  • Hairong Qi, PhD

    University of Tennessee - Knoxville, USA

  • Stefanos Vrochidis, PhD

    Information Technologies Institute, Centre for Research and Technology Hellas, Greece

  • Gemine Vivone, PhD

    National Research Council - Institute of Methodologies for Environmental Analysis, CNR-IMAA, Italy