Call for Papers: Special Issue on Safe and Fair Machine Learning

In recent years, safety and fairness have emerged as increasingly relevant topics in machine learning
(ML), largely because ML has become an inseparable part of our daily lives. ML is everywhere: traffic prediction, recommendation systems, marketing analysis, medical diagnosis, autonomous driving, robot control, and decision-making support for businesses and even governments all make use of ML. ML systems have produced a disruptive change in society, enabling the automation of many tasks by leveraging the vast amounts of information available in the Big Data era. In some applications, ML systems have shown impressive capabilities, even outperforming humans.

Despite these achievements, the presence of ML in many real-world applications has brought new
challenges related to the trustworthiness of these systems. The potential of these algorithms to cause undesirable behaviors is a growing concern in the ML community, especially when they are integrated into real-world safety-critical systems. Deploying ML in the real world may have dangerous consequences: it has been shown that ML can delay medical diagnoses, cause environmental damage or harm to humans, produce racist, sexist, and otherwise discriminatory behaviors, and even provoke traffic accidents.

Moreover, learning algorithms are vulnerable and can be compromised by smart attackers, who can gain a significant advantage by exploiting the weaknesses of ML systems. In light of these concerns, one key question arises: can we avoid undesirable behaviors and design ML algorithms that behave safely and fairly?

This special issue aims to bring together papers outlining the safety and fairness implications of the use of ML in real-world systems, papers proposing methods to detect, prevent and/or alleviate undesired behaviors that ML-based systems might exhibit, papers analyzing the vulnerability of ML systems to adversarial attacks and the possible defense mechanisms, and, more generally, any paper that stimulates progress on topics related to safe and fair ML.

Topics of Interest
Contributions are sought in (but are not limited to) the following topics:

  • Fairness and/or safety in machine learning
  • Safe reinforcement learning
  • Safe robot control
  • Bias in machine learning
  • Adversarial examples in machine learning and defense mechanisms
  • Applications of transparency to safety and fairness in machine learning
  • Verification techniques to ensure safety and robustness
  • Safety and interpretability by having a human in the loop
  • Backdoors in machine learning
  • Transparency in machine learning
  • Robust and risk-sensitive decision making

Contributions must contain new, unpublished, original, and fundamental work related to the Machine Learning Journal’s mission. All submissions will be reviewed using rigorous scientific criteria, with the novelty of the contribution being a crucial factor.

Submission Instructions
Submit manuscripts to: . Select “SI: Safe and Fair Machine Learning” as the
article type. Papers must be prepared in accordance with the Journal guidelines:

Key Dates
Continuous submission/review process

Submission deadline: 28 February 2022
First decision: 28 April 2022
Revision and resubmission deadline: 28 May 2022
Paper acceptance: 28 July 2022
Camera-ready: 15 August 2022

Guest Editors
Dana Drachsler Cohen (Technion, Israel Institute of Technology)
Javier García (Universidad Carlos III de Madrid)
Mohammad Ghavamzadeh (Google Research)
Marek Petrik (University of New Hampshire)
Philip S. Thomas (University of Massachusetts)