This open access five-volume set constitutes the refereed proceedings of the Second World Conference on Explainable Artificial Intelligence, xAI 2025, held in Istanbul, Turkey, during July 2025.
The 96 revised full papers presented in these proceedings were carefully reviewed and selected from 224 submissions. The papers are organized in the following topical sections:
Volume I:
Concept-based explainable AI; human-centered explainability; explainability, privacy, and fairness in trustworthy AI; and XAI in healthcare.
Volume II:
Rule-based XAI systems & actionable explainable AI; feature importance-based XAI; novel post-hoc & ante-hoc XAI approaches; and XAI for scientific discovery.
Volume III:
Generative AI meets explainable AI; intrinsically interpretable explainable AI; benchmarking and XAI evaluation measures; and XAI for representational alignment.
Volume IV:
XAI in computer vision; counterfactuals in XAI; explainable sequential decision making; and explainable AI in finance & legal frameworks for XAI technologies.
Volume V:
Applications of XAI; human-centered XAI & argumentation; explainable and interactive hybrid decision making; and uncertainty in explainable AI.
Edited by: Riccardo Guidotti, Ute Schmid, Luca Longo
Imprint: Springer Nature Switzerland AG
Country of Publication: Switzerland
Dimensions: Height: 235mm, Width: 155mm
ISBN-13: 9783032083326
ISBN-10: 303208332X
Series: Communications in Computer and Information Science
Pages: 414
Publication Date: 19 October 2025
Audience: College/higher education, Professional and scholarly, Further/Higher Education, Undergraduate
Format: Paperback
Publisher's Status: Active
Contents:
Applications of XAI:
Global Explanations of Expected Goal Models in Football
Comprehensive Explanations Using Natural Language Queries
A Human-in-the-Loop Approach to Learning Social Norms as Defeasible Logical Constraints
A Cautionary Tale About "Neutrally" Informative AI Tools Ahead of the 2025 Federal Elections in Germany
Human-Centered XAI & Argumentation:
Evaluating Argumentation Graphs as Global Explainable Surrogate Models for Dense Neural Networks and their Comparison with Decision Trees
Mind the XAI Gap: A Human-Centered LLM Framework for Democratizing Explainable AI
Explanations for Medical Diagnosis Predictions Based on Argumentation Schemes
Spectral Occlusion - Attribution Beyond Spatial Relevance Heatmaps
Non-experts' Trust in XAI is Unreasonably High
Explainable and Interactive Hybrid Decision Making:
Exploring Annotator Disagreement in Sexism Detection: Insights from Explainable AI
Can You Regulate Your Emotions? An Empirical Investigation of the Influence of AI Explanations and Emotion Regulation on Human Decision-Making Factors
When Bias Backfires: The Modulatory Role of Counterfactual Explanations on the Adoption of Algorithmic Bias in XAI-Supported Human Decision-Making
Understanding Disagreement Between Humans and Machines in XAI: Robustness, Fidelity, and Region-Based Explanations in Automatic Neonatal Pain Assessment
On Combining Embeddings, Ontology and LLM to Retrieve Semantically Similar Quranic Verses and Generate their Explanations
Uncertainty in Explainable AI:
Improving Counterfactual Truthfulness for Molecular Property Prediction through Uncertainty Quantification
Fast Calibrated Explanations: Efficient and Uncertainty-Aware Explanations for Machine Learning Models
Explaining Low Perception Model Competency with High-Competency Counterfactuals
Uncertainty Propagation in XAI: A Comparison of Analytical and Empirical Estimators