
Explainable AI: Interpreting, Explaining and Visualizing Deep Learning

Overview of attention for book

Table of Contents

  • Book Overview
  • Chapter 1 Towards Explainable Artificial Intelligence
  • Chapter 2 Transparency: Motivations and Challenges
  • Chapter 3 Interpretability in Intelligent Systems – A New Concept?
  • Chapter 4 Understanding Neural Networks via Feature Visualization: A Survey
  • Chapter 5 Interpretable Text-to-Image Synthesis with Hierarchical Semantic Layout Generation
  • Chapter 6 Unsupervised Discrete Representation Learning
  • Chapter 7 Towards Reverse-Engineering Black-Box Neural Networks
  • Chapter 8 Explanations for Attributing Deep Neural Network Predictions
  • Chapter 9 Gradient-Based Attribution Methods
  • Chapter 10 Layer-Wise Relevance Propagation: An Overview
  • Chapter 11 Explaining and Interpreting LSTMs
  • Chapter 12 Comparing the Interpretability of Deep Networks via Network Dissection
  • Chapter 13 Gradient-Based Vs. Propagation-Based Explanations: An Axiomatic Comparison
  • Chapter 14 The (Un)reliability of Saliency Methods
  • Chapter 15 Visual Scene Understanding for Autonomous Driving Using Semantic Segmentation
  • Chapter 16 Understanding Patch-Based Learning of Video Data by Explaining Predictions
  • Chapter 17 Quantum-Chemical Insights from Interpretable Atomistic Neural Networks
  • Chapter 18 Interpretable Deep Learning in Drug Discovery
  • Chapter 19 NeuralHydrology – Interpreting LSTMs in Hydrology
  • Chapter 20 Feature Fallacy: Complications with Interpreting Linear Decoding Weights in fMRI
  • Chapter 21 Current Advances in Neural Decoding
  • Chapter 22 Software and Application Patterns for Explanation Methods
Attention for Chapter 11: Explaining and Interpreting LSTMs

About this Attention Score

  • In the top 25% of all research outputs scored by Altmetric
  • Good Attention Score compared to outputs of the same age (69th percentile)
  • High Attention Score compared to outputs of the same age and source (85th percentile)

Mentioned by

  • Twitter: 9 tweeters
  • Reddit: 1 Redditor

Citations

  • Dimensions: 374 citations

Readers on

  • Mendeley: 119 readers
Chapter title: Explaining and Interpreting LSTMs
Chapter number: 11
Book title: Explainable AI: Interpreting, Explaining and Visualizing Deep Learning
Published in: arXiv, September 2019
DOI: 10.1007/978-3-030-28954-6_11
Book ISBNs: 978-3-030-28953-9, 978-3-030-28954-6
Authors

Leila Arras, José Arjona-Medina, Michael Widrich, Grégoire Montavon, Michael Gillhofer, Klaus-Robert Müller, Sepp Hochreiter, Wojciech Samek

Twitter Demographics

The data shown below were collected from the profiles of the 9 tweeters who shared this research output.

Mendeley readers

The data shown below were compiled from readership statistics for the 119 Mendeley readers of this research output.

Geographical breakdown

Country    Count   As %
Unknown      119   100%

Demographic breakdown

Readers by professional status   Count   As %
Student > Ph.D. Student             26    22%
Student > Master                    20    17%
Researcher                          19    16%
Student > Bachelor                  11     9%
Student > Doctoral Student           5     4%
Other                               16    13%
Unknown                             22    18%

Readers by discipline            Count   As %
Computer Science                    46    39%
Engineering                         22    18%
Physics and Astronomy                4     3%
Earth and Planetary Sciences         4     3%
Medicine and Dentistry               3     3%
Other                               15    13%
Unknown                             25    21%

Attention Score in Context

This research output has an Altmetric Attention Score of 6. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 16 December 2019.
All research outputs                 #4,762,976 of 19,204,065 outputs
Outputs from arXiv                   #94,340 of 771,801 outputs
Outputs of similar age               #84,480 of 277,000 outputs
Outputs of similar age from arXiv    #3,958 of 28,050 outputs
Altmetric has tracked 19,204,065 research outputs across all sources so far. Compared to these, this one has done well and is in the 75th percentile: it's in the top 25% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 771,801 research outputs from this source. They receive a mean Attention Score of 3.9. This one has done well, scoring higher than 87% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age, we can compare this Altmetric Attention Score to the 277,000 tracked outputs that were published within six weeks on either side of this one in any source. This one has received more attention than average, scoring higher than 69% of its contemporaries.
We're also able to compare this research output to 28,050 others from the same source, published within six weeks on either side of this one. This one has done well, scoring higher than 85% of its contemporaries.
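The age-adjusted comparisons above boil down to a percentile rank: the fraction of peer outputs (those published within six weeks of this one) whose score falls below this output's score. A minimal sketch of that computation, using made-up peer scores for illustration (this is not Altmetric's actual code or data):

```python
def percentile_rank(score, peer_scores):
    """Percentage of peers with a strictly lower Attention Score."""
    lower = sum(1 for s in peer_scores if s < score)
    return 100.0 * lower / len(peer_scores)

# Hypothetical Attention Scores of contemporaries published within
# six weeks on either side of this output
peers = [0, 1, 1, 2, 3, 3, 4, 5, 8, 12]

# This output's Attention Score of 6 beats 8 of the 10 peers
print(percentile_rank(6, peers))  # 80.0
```

With the real pools (277,000 contemporaries across all sources, or 28,050 from arXiv alone), the same calculation yields the 69th and 85th percentiles quoted above.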