Call for Papers: Special Issue on Large-scale Pre-training: Data, Models, and Fine-tuning

Recent years have witnessed a surge of interest in and rapid development of large-scale pre-trained models, driven by the explosion of available data and model parameters. Large-scale pre-trained models have achieved milestone results on a broad range of practical problems, not only in computer science areas such as natural language processing, computer vision, and recommender systems, but also in other research areas such as biology, meteorology, and art. Unlike early non-neural and small-scale models, which rely heavily on hand-crafted features, statistical methods, and accurate human annotations, neural models can automatically learn low-level distributed representations and high-level latent semantic information from data. Because deep neural models with huge numbers of parameters tend to overfit and generalize poorly, massive effort has been devoted to exploring how to pre-train large-scale models on large-scale data. Since large-scale human annotation is labor-intensive and time-consuming, fully supervised pre-training at scale is impractical. To address this issue, the AI community has recently focused on self-supervised learning algorithms and theories, large-scale pre-training paradigms tailored to different data formats, large-scale model architecture designs, and fine-tuning pre-trained models for downstream applications.

This special issue seeks original contributions that advance the theory, architectures, and algorithmic design of large-scale pre-trained models as well as their downstream applications. It will provide a timely collection of recent advances to benefit researchers and practitioners working in the broad fields of deep learning, natural language processing, computer vision, and machine intelligence. Topics of interest include (but are not limited to):

  • Language Pre-training
  • Visual Pre-training
  • Multi-modal Pre-training
  • Multi-lingual Pre-training
  • Large-scale Pre-training Theories
  • Large-scale Pre-training Algorithms and Architectures
  • Efficient Large-scale Pre-training
  • Fine-tuning Pre-trained Models
  • Pre-training Applications
  • Survey of Large-scale Pre-training

Once your manuscript is finished, please submit it online at https://mc03.manuscriptcentral.com/mir.
During submission, at “Step 6 Details & Comments: Special Issue and Special Section”, please select “Special Issue on Large-scale Pre-training: Data, Models, and Fine-tuning”.

Submission Deadline: 30 June 2022

Guest Editors
Prof. Ji-Rong Wen, Renmin University of China, China (jrwen@ruc.edu.cn)
Prof. Zi Huang, The University of Queensland, Australia (huang@itee.uq.edu.au)
Prof. Hanwang Zhang, Nanyang Technological University, Singapore (hanwangzhang@ntu.edu.sg)