
Neural Computing and Applications - Topical Collection on Explainable Artificial Intelligence for Medical Applications

Aims, Scope and Objective

Explainable Artificial Intelligence (XAI) has emerged as a crucial element in the responsible and trustworthy deployment of machine intelligence systems, and its significance in the healthcare domain has attracted increasing attention in recent years. With the growing use of machine learning algorithms and AI systems in medicine, XAI can provide valuable and even life-saving solutions; at the same time, it raises new ethical and legal questions that must be thoroughly addressed. Explainability and interpretability are essential in healthcare, especially in critical applications: healthcare providers and patients must be able to understand and trust the decision-making processes of AI-driven systems. XAI offers transparency and clear explanations of how these systems reach their decisions, providing insight into the underlying reasoning and justifications. This builds trust and confidence in the use of AI in healthcare, as it allows for a better understanding of the outcomes and actions taken by such systems.

In light of these considerations, this topical collection aims to gather innovative research papers, encompassing both theoretical and experimental studies, that focus on the latest advancements in XAI for trustworthy machine intelligence in healthcare. The goal is to promote the development of XAI techniques tailored specifically for healthcare applications and highlight the challenges that need to be overcome to ensure AI's responsible and ethical use in healthcare.

By bringing together cutting-edge research in XAI for healthcare, this topical collection aims to foster a deeper understanding of the importance of explainability and interpretability in the context of AI-driven healthcare systems. It also seeks to promote discussions on the ethical, legal, and societal implications of using AI in healthcare and identify potential solutions and best practices for building trustworthy machine intelligence in healthcare settings. The ultimate goal is to ensure that the deployment of AI in healthcare is responsible, transparent, and accountable, and that it ultimately benefits patients, healthcare providers, and society as a whole.

Topics of interest include, but are not limited to, the following:

  • XAI methods for medical decision-making support systems
  • Interpreting deep learning models in healthcare applications
  • Explanation methods for medical imaging analysis
  • Ethical and legal issues in XAI in healthcare
  • Human-centered XAI design for healthcare applications
  • Evaluating the effectiveness and trustworthiness of XAI in healthcare
  • XAI for personalized medicine
  • Transparent decision-making processes
  • Real-world deployment and evaluation of XAI in healthcare

Guest Editors

Dr. Agostino Forestiero, ICAR-CNR, Rende, Italy, agostino.forestiero@icar.cnr.it
Dr. Gianni Costa, ICAR-CNR, Rende, Italy, gianni.costa@icar.cnr.it
Dr. Riccardo Ortale, ICAR-CNR, Rende, Italy, riccardo.ortale@icar.cnr.it

Manuscript submission deadline extended to: 29th February 2024

Peer Review Process

All papers will undergo peer review and will be evaluated by at least two reviewers. A thorough similarity check will also be carried out: the guest editors will check for any significant overlap between the manuscript under consideration and any published paper or submitted manuscript of which they are aware. In such cases, the article will be rejected without further consideration. The guest editors will make every reasonable effort to obtain the reviewers’ comments and recommendations on time.

Submitted papers must present original research that has not been published and is not currently under review at other venues. Previously published conference papers should be clearly identified by the authors at the submission stage, together with an explanation of how they have been extended for consideration in this special issue (with at least 60% difference from the original work).

Submission Guidelines

Paper submissions for the special issue should strictly follow the submission format and guidelines (https://www.springer.com/journal/521/submission-guidelines). Each manuscript should not exceed 16 pages in length (inclusive of figures and tables).

Manuscripts must be submitted to the journal online system at https://www.editorialmanager.com/ncaa/default.aspx or via the 'Submit manuscript' button on the journal homepage.
Authors should select “TC: Explainable Artificial Intelligence for Medical Applications” during the submission step ‘Additional Information’.

Author Resources

Authors are encouraged to submit high-quality, original work that has neither appeared in nor is under consideration by other journals.
Springer provides a wealth of information about publishing in a Springer journal on its Journal Author Resources page, including FAQs, Tutorials, and Help and Support.