  • Book
  • © 2019

Explainable AI: Interpreting, Explaining and Visualizing Deep Learning

  • Assesses the current state of research on Explainable AI (XAI)
  • Provides a snapshot of interpretable AI techniques
  • Reflects the current discourse and provides directions for future development

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 11700)

Part of the book sub series: Lecture Notes in Artificial Intelligence (LNAI)


Table of contents (22 chapters)

  1. Front Matter, pages i-xi
  2. Part I: Towards AI Transparency
       1. Front Matter, pages 1-3
       2. Towards Explainable Artificial Intelligence (Wojciech Samek, Klaus-Robert Müller), pages 5-22
       3. Transparency: Motivations and Challenges (Adrian Weller), pages 23-40
       4. Interpretability in Intelligent Systems – A New Concept? (Lars Kai Hansen, Laura Rieger), pages 41-49
  3. Part II: Methods for Interpreting AI Systems
       1. Front Matter, pages 51-53
       2. Understanding Neural Networks via Feature Visualization: A Survey (Anh Nguyen, Jason Yosinski, Jeff Clune), pages 55-76
       3. Interpretable Text-to-Image Synthesis with Hierarchical Semantic Layout Generation (Seunghoon Hong, Dingdong Yang, Jongwook Choi, Honglak Lee), pages 77-95
       4. Unsupervised Discrete Representation Learning (Weihua Hu, Takeru Miyato, Seiya Tokui, Eiichi Matsumoto, Masashi Sugiyama), pages 97-119
       5. Towards Reverse-Engineering Black-Box Neural Networks (Seong Joon Oh, Bernt Schiele, Mario Fritz), pages 121-144
  4. Part III: Explaining the Decisions of AI Systems
       1. Front Matter, pages 145-147
       2. Explanations for Attributing Deep Neural Network Predictions (Ruth Fong, Andrea Vedaldi), pages 149-167
       3. Gradient-Based Attribution Methods (Marco Ancona, Enea Ceolini, Cengiz Öztireli, Markus Gross), pages 169-191
       4. Layer-Wise Relevance Propagation: An Overview (Grégoire Montavon, Alexander Binder, Sebastian Lapuschkin, Wojciech Samek, Klaus-Robert Müller), pages 193-209
       5. Explaining and Interpreting LSTMs (Leila Arras, José Arjona-Medina, Michael Widrich, Grégoire Montavon, Michael Gillhofer, Klaus-Robert Müller et al.), pages 211-238
  5. Part IV: Evaluating Interpretability and Explanations
       1. Front Matter, pages 239-241
       2. Comparing the Interpretability of Deep Networks via Network Dissection (Bolei Zhou, David Bau, Aude Oliva, Antonio Torralba), pages 243-252
       3. The (Un)reliability of Saliency Methods (Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne et al.), pages 267-280
  6. Part V: Applications of Explainable AI
       1. Front Matter, pages 281-284

About this book

The development of “intelligent” systems that can take decisions and perform autonomously promises faster and more consistent decisions. A limiting factor for a broader adoption of AI technology, however, is the inherent risk that comes with ceding human control and oversight to “intelligent” machines. For sensitive tasks involving critical infrastructures or affecting human well-being and health, it is crucial to limit the possibility of improper, non-robust, and unsafe decisions and actions. Before an AI system is deployed, there is therefore a strong need to validate its behavior and to establish guarantees that it will continue to perform as expected in a real-world environment. In pursuit of this objective, ways for humans to verify the agreement between the AI decision structure and their own ground-truth knowledge have been explored. Explainable AI (XAI) has developed as a subfield of AI focused on exposing complex AI models to humans in a systematic and interpretable manner.

The 22 chapters included in this book provide a timely snapshot of algorithms, theory, and applications of recently proposed interpretable and explainable AI techniques, reflecting the current discourse in the field and pointing out directions for future development. The book is organized into six parts: towards AI transparency; methods for interpreting AI systems; explaining the decisions of AI systems; evaluating interpretability and explanations; applications of explainable AI; and software for explainable AI.
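
As a concrete taste of the attribution methods surveyed in Part III, the sketch below computes a plain gradient ("vanilla") saliency map, the simplest member of the gradient-based family covered in the chapter by Ancona et al. It is a minimal illustration rather than code from the book: the tiny PyTorch model and the random input are placeholder assumptions.

    import torch
    import torch.nn as nn

    # Stand-in classifier; any differentiable model would do here.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    model.eval()

    # Dummy 28x28 "image" with gradient tracking enabled (placeholder data).
    x = torch.rand(1, 1, 28, 28, requires_grad=True)

    logits = model(x)
    target = logits.argmax(dim=1).item()  # explain the predicted class

    # Gradient of the class score with respect to the input pixels.
    logits[0, target].backward()

    # Saliency map: magnitude of the input gradient, one value per pixel.
    saliency = x.grad.abs().squeeze()
    print(saliency.shape)  # torch.Size([28, 28])

More refined schemes discussed in the book, such as layer-wise relevance propagation, replace this raw gradient with propagation rules that distribute the prediction score over the input.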



Reviews

“This is a very valuable collection for those working in any application of deep learning that looks for the key techniques in XAI at the moment. Readers from other areas in AI or new to XAI can get a glimpse of where cutting-edge research is heading.” (Jose Hernandez-Orallo, Computing Reviews, July 24, 2020)

Editors and Affiliations

  • Wojciech Samek, Fraunhofer Heinrich Hertz Institute, Berlin, Germany

  • Grégoire Montavon, Technische Universität Berlin, Berlin, Germany

  • Andrea Vedaldi, University of Oxford, Oxford, UK

  • Lars Kai Hansen, Technical University of Denmark, Kgs. Lyngby, Denmark

  • Klaus-Robert Müller, Technische Universität Berlin, Berlin, Germany

