Topical Collection on Human-in-the-loop Machine Learning and its Applications
Aims, Scope and Objective
Human-in-the-Loop (HIL) learning means incorporating human feedback into the training loop of machine learning models to meet the following requirements: 1) to improve the quality of training and to reduce or prevent model errors. When the testing error exceeds a certain threshold, a HIL learning model can obtain new data points from users interactively. In some situations, large errors produced by the model must be avoided. For instance, in robot manipulation, reinforcement learning alone is not sufficient to guarantee safety when an exploration policy is in place, since unexpected actions may be generated. In such scenarios, data points drawn from human guidance are crucial both for the robot's safe execution and for the optimization of the model. 2) to incorporate human labelling to improve pre-trained models. When training state-of-the-art models, the quality of the training datasets is extremely important. One way to actively incorporate more data is to optimize the models with human feedback (e.g. rewards in reinforcement learning) or new data points (e.g. in supervised learning), adapting the pre-trained models to different environments.
To meet the aforementioned requirements, humans are involved in the training process of the algorithms by continuously optimizing the model's parameters, feeding it data, or even adjusting the model itself through meta-learning. From the perspective of algorithm design, a key problem in training with a human in the loop is how to balance active learning from the human with the optimization of the models. In other words, how can we design a proper query strategy for different applications and scenarios? When properly implemented, HIL is well suited to real-world applications where data are sparse. An active learning mechanism built into the model can seek the human's help in a supervised or reinforcement learning setting. To this end, well-designed interactive displays, machines and robots can help to obtain the human's input. Such designs relate to HCI and UX/UI, and to how we can efficiently utilize human expertise to reduce an exponentially large search space. In this topical collection, designs and experiments that evaluate the effectiveness of human-in-the-loop applications can also be discussed.
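As a minimal sketch of such a query strategy (the threshold value, the toy model, and all function names below are hypothetical, chosen purely for illustration), the learning system can defer to a human annotator whenever its prediction confidence falls below a threshold, so that human effort is spent only on the uncertain cases:

```python
# Hypothetical sketch of a confidence-threshold query strategy for
# human-in-the-loop annotation. The "model" and "human" here are
# stand-in functions; a real system would plug in a trained model
# and an interactive labelling interface.

CONFIDENCE_THRESHOLD = 0.8  # assumed value; application-dependent

def model_predict(x):
    """Stand-in for a trained binary classifier: returns (label, confidence).

    Toy rule: the model is more confident the further x lies from the
    decision boundary at 0.5.
    """
    confidence = abs(x - 0.5) * 2
    return (1 if x > 0.5 else 0), confidence

def human_label(x):
    """Stand-in for an interactive human oracle."""
    return 1 if x > 0.5 else 0

def hil_annotate(stream):
    """Label a stream of inputs, querying the human only when the model
    is uncertain. Returns the labels and the number of human queries."""
    labels, queries = [], 0
    for x in stream:
        label, confidence = model_predict(x)
        if confidence < CONFIDENCE_THRESHOLD:
            label = human_label(x)  # fall back to the human in the loop
            queries += 1
        labels.append(label)
    return labels, queries

data = [0.05, 0.45, 0.55, 0.95]
labels, queries = hil_annotate(data)
```

In this toy run, the two points near the decision boundary (0.45 and 0.55) trigger human queries, while the two confident predictions do not. Designing the threshold and the query criterion for a specific application is exactly the open problem discussed above.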
This topical collection will also offer an opportunity for researchers and practitioners in the diverse fields of robotics where human reinforcement feedback would have a positive impact on the training process. The inclusion of HIL allows robots and machine learning models to use both internal and external feedback to speed up learning and improve performance. In many ways, this could allow the models to learn through their own self-reflection as well as from the external input of a human.
Specifically, as a follow-up journal publication to the special session on HIL machine learning at IEEE SMC 2020, extended versions of the accepted papers are particularly welcome.
Topics of interest include, but are not limited to:
- Human Guided Reinforcement Learning
- Human-robot Collaboration
- Human-robot Social Interaction
- Dialogue Systems with Human-in-the-loop
- Interpretable Machine Learning with Human-in-the-loop
- Active Learning and Continual Learning
- Learning by Demonstration
- Human Factors in HCI/HRI
Dr. Joni Zhong (Lead Guest Editor), Nottingham Trent University, UK, firstname.lastname@example.org
Dr. Mark Elshaw, Coventry University, UK, email@example.com
Dr. Yanan Li, University of Sussex, UK, firstname.lastname@example.org
Prof. Dr. Stefan Wermter, University of Hamburg, Germany, email@example.com
Prof. Xiaofeng Liu, Hohai University, China, firstname.lastname@example.org
Deadline for submissions: 31st December 2020
Deadline for review: 28th February 2021
Decisions: 20th March 2021
Deadline for revised version by authors: 20th April 2021
Deadline for 2nd review: 10th May 2021
Final decisions: 20th May 2021
Peer Review Process
The five guest editors will oversee the general quality of the submissions and carry out the first screening. Although most submissions will be extended versions of accepted conference papers, we will check topic relevance and quality at this first stage.
At the second stage, at least 8-10 reviewers will be appointed for peer review. We will ensure that each paper receives three reviews. Reviewers will have 8 weeks to complete each review.
Each manuscript should not exceed 16 pages in length (inclusive of figures and tables).
Paper submissions for the special issue should follow the submission format and guidelines (https://www.springer.com/journal/521/submission-guidelines).
Authors should select 'SI: Human-in-the-loop Machine Learning' during the submission step 'Additional Information'.
Submitted papers must present original research that has not been published and is not currently under review elsewhere. Previously published conference papers should be clearly identified at submission, together with an explanation of how they have been extended. At least 30% new content is expected.