Machine Learning - Call for Papers: Special Issue on Explainable AI for Secure Applications
Aims and Scope
Over the past decade, the boom in Artificial Intelligence (AI) and Machine Learning (ML) has spurred the pervasive use of deep neural networks to improve the accuracy of decision-making systems in many fields. While the primary goal of ML models remains making correct decisions, eXplainable AI (XAI) has recently emerged as a key technology for explaining how ML algorithms and models reach their decisions. At the same time, alongside traditional cyber-attacks, AI and ML systems are as vulnerable to attack as any other software system, with the added complexity that the data, the models, and even the explanations themselves can be targeted. This special issue focuses on the challenges and open problems in leveraging XAI for Secure Applications. It aims to share and discuss recent advances and future trends in secure and explainable ML, to assure stakeholders of the safety and security of ML-based decisions, and to accelerate the development of XAI approaches for Secure Applications. The topic of this special issue is closely connected to the emerging vision of Symbiotic AI.
Topics of interest include, but are not limited to:
- Interpretability and explainability of machine learning and deep learning models
- XAI to increase the accuracy of machine learning and deep learning models
- XAI to increase the transparency of machine learning and deep learning models
- XAI to improve the detection of adversarial machine learning attacks and the robustness of AI models against malicious actions
- XAI to develop novel adversarial machine learning algorithms
- Metrics to evaluate the robustness of XAI algorithms to adversarial attacks
- Exploring vulnerabilities of XAI algorithms
- Novel designs and implementations of XAI algorithms that are more robust to adversarial attacks
- New datasets, benchmarks and challenges to assess the vulnerability of AI and XAI algorithms
- Examples of innovative applications of XAI algorithms for security and vulnerability analysis of AI models
Schedule
Paper submission deadline: February 25, 2025 (submissions open October 15, 2024)
First notification of acceptance: April 15, 2025
Deadline for revised submissions: May 15, 2025
Final notification of acceptance: July 15, 2025
Expected publication date (online): September/October 2025
Guest editors
Annalisa Appice, University of Bari Aldo Moro, Italy, annalisa.appice@uniba.it
Giuseppina Andresini, University of Bari Aldo Moro, Italy, giuseppina.andresini@uniba.it
Przemysław Biecek, Warsaw University of Technology, Poland, przemyslaw.biecek@pw.edu.pl
Christian Wressnegger, Karlsruhe Institute of Technology (KIT), KASTEL Security Research Labs, Germany, christian.wressnegger@kit.edu
Submission procedure
According to the journal's policy, no submission, or substantially overlapping submission, may be published or under review at another journal or conference at any time during the review process. Papers extending previously published conference papers are acceptable, provided the journal submission makes a significant contribution beyond the conference paper and the overlap is described clearly at the beginning of the journal submission. If you have any questions about whether the overlap with another paper is "substantial," please include in the paper a discussion of the similarities and differences with the other paper(s), including the unique contribution(s) of the Machine Learning submission.
To submit to this special issue, authors must make a journal submission to the Springer Machine Learning journal (https://link.springer.com/journal/10994) and select the “Explainable AI for Secure Applications” special issue as the submission type. It is strongly recommended that submitted papers not exceed 20 pages, including references. Each paper may be accompanied by an unlimited number of appendices.
The papers should be formatted using Springer Nature’s LaTeX template. The journal requires authors to include an information sheet as supplementary material that contains a short summary of their contribution and specifically addresses the following questions:
- What is the main claim of the paper? Why is this an important contribution to the machine learning literature? [“We are the first to have done X” is not an acceptable answer without stating the importance of X.]
- What is the evidence you provide to support your claim? Be precise. [“The evidence is provided by experiments and/or theoretical analysis” is not an acceptable answer without a summary of the main results and their implications.]
- What papers by other authors make the most closely related contributions, and how is your paper related to them?
- Have you published parts of your paper before, for instance in a conference? If so, give details of your previous paper(s) and a precise statement detailing how your paper provides a significant contribution beyond the previous paper(s).