Topical Collection on Human-aligned Reinforcement Learning for Autonomous Agents and Robots
Aims, Scope and Objective
A slew of advances in the field of reinforcement learning (RL) have resulted in significant improvements in learning efficiency for autonomous agents and robots. However, the vast majority of these works fail to consider human and other contextual factors, which are important from a practical standpoint for increasing trustworthiness in human-robot scenarios. Recently, there has been a growing focus on auxiliary performance measures such as avoiding unsafe actions during learning, providing human-interpretable solutions, and learning context-aware policies in general. Such performance measures improve the practical utility of RL and make it an increasingly attractive option for real-world autonomous robots capable of harmonious coexistence with human users.
The focus of this topical collection is to bring together researchers from the fields of robotics and RL to discuss and share state-of-the-art methods, challenges, and novel solutions pertaining to the incorporation of human-related aspects into RL agents and robots. Contributions are expected mainly from computer scientists and roboticists working in areas related to intrinsically motivated learning with human-aligned methods. We hope to provide an opportunity to discuss both the fundamental issues that must be addressed to foster the presence of autonomous agents and robots in real-world scenarios and future research directions.
The main topics of interest in this call for submissions are explainability, interactivity, safety, and ethics in social robotics and autonomous agents, especially from a reinforcement learning perspective. In this regard, approaches of special interest for this topical collection include (but are not limited to):
- Explainability, interpretability, and transparency methods for feature-oriented and goal-driven RL.
- Explainable robotic systems with RL approaches.
- Assisted and interactive RL in human-robot and human-agent scenarios.
- Human-in-the-loop RL and applications.
- RL from demonstrations and imperfect demonstrations.
- Robot and agent learning from multiple human sources.
- Multi-robot systems with human collaboration.
- Safe exploration during learning.
- Ethical reasoning and moral uncertainty.
- Fairness in RL and multi-agent systems.
- Theory-of-mind-based RL frameworks.
- Use of human priors in RL.
Dr. Francisco Cruz (Lead guest editor), School of Information Technology, Deakin University Geelong, Australia, email@example.com
Dr. Thommen George Karimpanal, Applied Artificial Intelligence Institute (A2I2), Deakin University Geelong, Australia, firstname.lastname@example.org
Dr. Miguel Solis, Facultad de Ingeniería, Universidad Andres Bello Santiago, Chile, email@example.com
Dr. Pablo Barros, Cognitive Architecture for Collaborative Technologies Unit (CONTACT), Italian Institute of Technology (IIT), Genova, Italy, firstname.lastname@example.org
A/Prof. Richard Dazeley, School of Information Technology, Deakin University, Geelong, Australia, email@example.com
Deadline for submissions: December 15, 2021
Deadline for review: February 15, 2022
Decisions: March 15, 2022
Revised manuscript submission: May 15, 2022
Deadline for second review: June 15, 2022
Final decisions: June 30, 2022
The topical collection will be open for worldwide submissions, encouraging contributions especially from computer science, artificial intelligence, and robotics.
Peer Review Process
All papers will go through peer review and will be evaluated by at least three reviewers. A thorough check will be completed: the guest editors will check for any significant similarity between the manuscript under consideration and any published paper or submitted manuscript of which they are aware. In such cases, the article will be rejected without proceeding further. The guest editors will make every reasonable effort to receive the reviewers' comments and recommendations on time.
Submitted papers must present original research that has not been published and is not currently under review at other venues. Previously published conference papers should be clearly identified by the authors at the submission stage, and an explanation should be provided of how such papers have been extended to be considered for this special issue (with at least 30% difference from the original works).
Paper submissions for the special issue should strictly follow the submission format and guidelines (https://www.springer.com/journal/521/submission-guidelines). Each manuscript should not exceed 16 pages in length (inclusive of figures and tables).
Manuscripts must be submitted to the journal online system at https://www.editorialmanager.com/ncaa/default.aspx.
Authors should select "TC: Human-aligned RL" during the 'Additional Information' submission step.