Explainable AI: Interpreting, Explaining and Visualizing Deep Learning

  • Book
  • © 2019

Overview

  • Assesses the current state of research on Explainable AI (XAI)
  • Provides a snapshot of interpretable AI techniques
  • Reflects the current discourse and provides directions of future development

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 11700)

Part of the book sub series: Lecture Notes in Artificial Intelligence (LNAI)

Table of contents (22 chapters)

  1. Part I Towards AI Transparency

  2. Part II Methods for Interpreting AI Systems

  3. Part III Explaining the Decisions of AI Systems

  4. Part IV Evaluating Interpretability and Explanations

  5. Part V Applications of Explainable AI

  6. Part VI Software for Explainable AI

About this book

The development of “intelligent” systems that can make decisions and act autonomously promises faster and more consistent decision-making. A limiting factor for broader adoption of AI technology, however, is the inherent risk that comes with ceding human control and oversight to “intelligent” machines. For sensitive tasks involving critical infrastructures or affecting human well-being and health, it is crucial to limit the possibility of improper, non-robust and unsafe decisions and actions. Before deploying an AI system, we see a strong need to validate its behavior, and thus establish guarantees that it will continue to perform as expected when deployed in a real-world environment. In pursuit of that objective, ways for humans to verify the agreement between the AI decision structure and their own ground-truth knowledge have been explored. Explainable AI (XAI) has developed as a subfield of AI focused on exposing complex AI models to humans in a systematic and interpretable manner.

The 22 chapters included in this book provide a timely snapshot of algorithms, theory, and applications of interpretable and explainable AI techniques that have been proposed recently, reflecting the current discourse in this field and providing directions for future development. The book is organized in six parts: towards AI transparency; methods for interpreting AI systems; explaining the decisions of AI systems; evaluating interpretability and explanations; applications of explainable AI; and software for explainable AI.
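
To give a concrete flavor of what “explaining the decisions of AI systems” involves, the sketch below computes a gradient-based saliency map, one of the simplest techniques in this family. It is an illustrative example only, not code from the book; PyTorch and torchvision are assumed, and the untrained network and random input are placeholders for a real model and image.

```python
# Illustrative gradient-based saliency sketch (not code from the book).
# Each input pixel is scored by how strongly the predicted class score
# depends on it, yielding a relevance heatmap over the input.
import torch
import torchvision.models as models

model = models.resnet18(weights=None)  # placeholder, untrained network
model.eval()

x = torch.rand(1, 3, 224, 224, requires_grad=True)  # placeholder "image"

logits = model(x)
target = logits.argmax(dim=1).item()   # explain the class the model predicts
logits[0, target].backward()           # gradient of that score w.r.t. the input

# Collapse the colour channels: one relevance value per pixel.
saliency = x.grad.abs().max(dim=1).values  # shape (1, 224, 224)
print(saliency.shape)
```

More elaborate methods covered in the book, such as layer-wise relevance propagation, refine this idea by propagating relevance through the network layer by layer rather than relying on a single input gradient.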



Reviews

“This is a very valuable collection for those working in any application of deep learning that looks for the key techniques in XAI at the moment. Readers from other areas in AI or new to XAI can get a glimpse of where cutting-edge research is heading.” (Jose Hernandez-Orallo, Computing Reviews, July 24, 2020)

Editors and Affiliations

  • Fraunhofer Heinrich Hertz Institute, Berlin, Germany

    Wojciech Samek

  • Technische Universität Berlin, Berlin, Germany

    Grégoire Montavon

  • University of Oxford, Oxford, UK

    Andrea Vedaldi

  • Technical University of Denmark, Kgs. Lyngby, Denmark

    Lars Kai Hansen

  • Technische Universität Berlin, Berlin, Germany

    Klaus-Robert Müller
