Explainable AI: Interpreting, Explaining and Visualizing Deep Learning
Springer International Publishing
| Chapter title | The (Un)reliability of Saliency Methods |
|---|---|
| Chapter number | 14 |
| Book title | Explainable AI: Interpreting, Explaining and Visualizing Deep Learning |
| Published by | Springer, Cham, September 2019 |
| DOI | 10.1007/978-3-030-28954-6_14 |
| Book ISBNs | 978-3-030-28953-9, 978-3-030-28954-6 |
| Authors | Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne, Dumitru Erhan, Been Kim |
| Country | Count | As % |
|---|---|---|
| Unknown | 487 | 100% |
| Readers by professional status | Count | As % |
|---|---|---|
| Student > Ph.D. Student | 124 | 25% |
| Student > Master | 94 | 19% |
| Researcher | 67 | 14% |
| Student > Bachelor | 44 | 9% |
| Other | 16 | 3% |
| Other | 35 | 7% |
| Unknown | 107 | 22% |
| Readers by discipline | Count | As % |
|---|---|---|
| Computer Science | 231 | 47% |
| Engineering | 58 | 12% |
| Mathematics | 10 | 2% |
| Physics and Astronomy | 8 | 2% |
| Medicine and Dentistry | 7 | 1% |
| Other | 50 | 10% |
| Unknown | 123 | 25% |