
Explainable Artificial Intelligence

Overview of attention for book

Table of Contents

  Book Overview
  Chapter 1 Towards the Visualization of Aggregated Class Activation Maps to Analyse the Global Contribution of Class Features
  Chapter 2 Natural Example-Based Explainability: A Survey
  Chapter 3 Explainable Artificial Intelligence in Education: A Comprehensive Review
  Chapter 4 Contrastive Visual Explanations for Reinforcement Learning via Counterfactual Rewards
  Chapter 5 Compare-xAI: Toward Unifying Functional Testing Methods for Post-hoc XAI Algorithms into a Multi-dimensional Benchmark
  Chapter 6 Explainability in Practice: Estimating Electrification Rates from Mobile Phone Data in Senegal
  Chapter 7 A Novel Architecture for Robust Explainable AI Approaches in Critical Object Detection Scenarios Based on Bayesian Neural Networks
  Chapter 8 Explaining Black-Boxes in Federated Learning
  Chapter 9 PERFEX: Classifier Performance Explanations for Trustworthy AI Systems
  Chapter 10 The Duet of Representations and How Explanations Exacerbate It
  Chapter 11 Closing the Loop: Testing ChatGPT to Generate Model Explanations to Improve Human Labelling of Sponsored Content on Social Media
  Chapter 12 Human-Computer Interaction and Explainability: Intersection and Terminology
  Chapter 13 Explaining Deep Reinforcement Learning-Based Methods for Control of Building HVAC Systems
  Chapter 14 Handling Missing Values in Local Post-hoc Explainability
  Chapter 15 Necessary and Sufficient Explanations of Multi-Criteria Decision Aiding Models, with and Without Interacting Criteria
  Chapter 16 XInsight: Revealing Model Insights for GNNs with Flow-Based Explanations
  Chapter 17 What Will Make Misinformation Spread: An XAI Perspective
  Chapter 18 MEGAN: Multi-explanation Graph Attention Network
  Chapter 19 Quantifying the Intrinsic Usefulness of Attributional Explanations for Graph Neural Networks with Artificial Simulatability Studies
  Chapter 20 Evaluating Link Prediction Explanations for Graph Neural Networks
  Chapter 21 Propaganda Detection Robustness Through Adversarial Attacks Driven by eXplainable AI
  Chapter 22 Explainable Automated Anomaly Recognition in Failure Analysis: Is Deep Learning Doing it Correctly?
  Chapter 23 DExT: Detector Explanation Toolkit
  Chapter 24 Unveiling Black-Boxes: Explainable Deep Learning Models for Patent Classification
  Chapter 25 HOLMES: HOLonym-MEronym Based Semantic Inspection for Convolutional Image Classifiers
  Chapter 26 Evaluating the Stability of Semantic Concept Representations in CNNs for Robust Explainability
  Chapter 27 Beyond One-Hot-Encoding: Injecting Semantics to Drive Image Classifiers
  Chapter 28 Finding Spurious Correlations with Function-Semantic Contrast Analysis
  Chapter 29 Explaining Search Result Stances to Opinionated People
  Chapter 30 A Co-design Study for Multi-stakeholder Job Recommender System Explanations
  Chapter 31 Explaining Socio-Demographic and Behavioral Patterns of Vaccination Against the Swine Flu (H1N1) Pandemic
  Chapter 32 Semantic Meaningfulness: Evaluating Counterfactual Approaches for Real-World Plausibility and Feasibility
Attention for Chapter 25: HOLMES: HOLonym-MEronym Based Semantic Inspection for Convolutional Image Classifiers

Mentioned by

X (Twitter): 1 user

Citations

Dimensions: 1 citation

Readers on

Mendeley: 1 reader
Chapter title
HOLMES: HOLonym-MEronym Based Semantic Inspection for Convolutional Image Classifiers
Chapter number 25
Book title
Explainable Artificial Intelligence
Published in
arXiv, January 2023
DOI 10.1007/978-3-031-44067-0_25
Book ISBNs
978-3-031-44066-3, 978-3-031-44067-0
Authors

Francesco Dibitonto, Fabio Garcea, André Panisson, Alan Perotti, Lia Morra

X Demographics

The data shown below were collected from the profile of 1 X user who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 1 Mendeley reader of this research output.

Geographical breakdown

Country    Count    As %
Unknown    1        100%

Demographic breakdown

Readers by professional status    Count    As %
Researcher                        1        100%

Readers by discipline    Count    As %
Psychology               1        100%
Attention Score in Context

This research output has an Altmetric Attention Score of 1. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 14 March 2024.
All research outputs: #22,859,183 of 25,489,496 outputs
Outputs from arXiv: #677,834 of 923,375 outputs
Outputs of similar age: #407,348 of 476,690 outputs
Outputs of similar age from arXiv: #19,865 of 29,596 outputs
Altmetric has tracked 25,489,496 research outputs across all sources so far. This one is in the 1st percentile – i.e., 1% of other outputs scored the same or lower than it.
So far Altmetric has tracked 923,375 research outputs from this source. They receive a mean Attention Score of 4.3. This one is in the 1st percentile – i.e., 1% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 476,690 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 1st percentile – i.e., 1% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 29,596 others from the same source and published within six weeks on either side of this one. This one is in the 1st percentile – i.e., 1% of its contemporaries scored the same or lower than it.