| Chapter title | The (Un)reliability of Saliency Methods |
|---|---|
| Chapter number | 14 |
| Book title | Explainable AI: Interpreting, Explaining and Visualizing Deep Learning |
| Published by | Springer, Cham, September 2019 |
| DOI | 10.1007/978-3-030-28954-6_14 |
| Book ISBNs | 978-3-030-28953-9, 978-3-030-28954-6 |
| Authors | Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne, Dumitru Erhan, Been Kim |
Mendeley readers
The data shown below were compiled from readership statistics for 451 Mendeley readers of this research output.
Geographical breakdown
| Country | Count | As % |
|---|---|---|
| Unknown | 451 | 100% |
Demographic breakdown
| Readers by professional status | Count | As % |
|---|---|---|
| Student > Ph.D. Student | 119 | 26% |
| Student > Master | 96 | 21% |
| Researcher | 64 | 14% |
| Student > Bachelor | 43 | 10% |
| Student > Doctoral Student | 14 | 3% |
| Other | 33 | 7% |
| Unknown | 82 | 18% |
| Readers by discipline | Count | As % |
|---|---|---|
| Computer Science | 227 | 50% |
| Engineering | 56 | 12% |
| Mathematics | 9 | 2% |
| Physics and Astronomy | 8 | 2% |
| Agricultural and Biological Sciences | 6 | 1% |
| Other | 48 | 11% |
| Unknown | 97 | 22% |