News

  • 04 Feb 2019: Review process updated. Check the author page for more details.
  • 29 Jan 2019: Submission portal and instructions announced. More details on the author page.
  • 23 Jan 2019: Paper submission dates and Keynote speaker announced.
  • 11 Jan 2019: Cross Modal Learning workshop website online.
Overview

    The workshop on crossmodal learning and application puts the emphasis on how different modalities semantically interact with each other, rather than on simply learning from integrated multimodal information and retrieving it. The goal of this workshop is to address questions such as the following:

    • How to handle noise, data imbalance, and a small number of labelled samples in cross-modal data?
    • How to efficiently transfer knowledge from one modality with abundant supervision information to another modality with less or even no knowledge?
    • How to translate data across different modalities, e.g. the generation of motion-sensor data from visual input or visually indicated sound?
    • How to align cross-modal data using appropriate alignment functions and similarity measurements?
    • How to utilise different modalities in an optimal way to satisfy requirements that sometimes even contradict each other, such as business demands, cost constraints, and user satisfaction?
    The sources of the multimodal data are not restricted in any way; they could come from users, devices, machines, systems, and distributed environments. This workshop not only attempts to leverage knowledge across modalities but also to motivate its application in industry and society.