Call for Papers: Special Issue on Weakly Supervised Representation Learning

Modern machine learning is entering the era of complex models (e.g., deep neural networks), which place heavy emphasis on data representation. This learning paradigm is known as representation learning. Specifically, representations learned via deep neural networks often yield much better performance than hand-designed representations. However, representation learning typically requires a plethora of well-annotated data. Large companies can afford to collect such data, while for startups or non-profit organizations it is barely acquirable, due to the cost of labeling or the intrinsic scarcity of data in the given domain. These practical issues motivate researchers in machine learning and related fields to investigate weakly supervised representation learning (WSRL), since WSRL does not require such a huge amount of annotated data. We define WSRL as the collection of representation learning problem settings and algorithms that share the same goals as supervised representation learning but can only access less supervision than supervised representation learning. This special issue covers both theoretical and applied aspects of WSRL, including but not limited to the following topics:

  • Algorithm and theory of incomplete supervision, e.g., semi-supervised representation learning, active representation learning and positive-unlabeled representation learning;
  • Algorithm and theory of inexact supervision, e.g., multi-instance representation learning and complementary representation learning;
  • Algorithm and theory of inaccurate supervision, e.g., crowdsourced representation learning, label-noise representation learning and partial-/superset-label representation learning;
  • Algorithm and theory of cross-domain supervision, e.g., zero-/one-/few-shot representation learning, transferable representation learning and multi-task representation learning;
  • Algorithm and theory of imperfect demonstration, e.g., inverse reinforcement representation learning and imitation representation learning with non-expert demonstrations;
  • Applications: 1) weakly-supervised object detection (computer vision); 2) weakly-supervised sequence modeling (natural language processing); 3) weakly-supervised cross-media retrieval (information retrieval); 4) weakly-supervised medical image segmentation (healthcare analysis).

IMPORTANT DATES

November 1st, 2021: Paper Submission.
February 25th, 2022: First Decision.
April 20th, 2022: Revision.
June 25th, 2022: Final Decision.
July 15th, 2022: Camera-ready.
August 2022: Publication.

GUEST EDITORS

Bo Han, Hong Kong Baptist University, HKSAR, China.
Tongliang Liu, University of Sydney, Australia.
Quanming Yao, Tsinghua University, China.
Mingming Gong, University of Melbourne, Australia.
Gang Niu, RIKEN, Japan.
Ivor W. Tsang, University of Technology Sydney, Australia.
Masashi Sugiyama, RIKEN / University of Tokyo, Japan.

SUBMISSION INSTRUCTIONS

Submit manuscripts to: http://mach.edmgr.com. Select “SI: Weakly Supervised Representation Learning” as the article type. Early submissions are welcome.

Papers must be prepared in accordance with the Journal guidelines: https://www.springer.com/journal/10994/submission-guidelines. Authors are encouraged to submit high-quality, original work that has neither appeared in, nor is under consideration by, other journals. All papers will be reviewed following the standard reviewing procedures of the Journal.