This open access five-volume set constitutes the refereed proceedings of the Second World Conference on Explainable Artificial Intelligence, xAI 2025, held in Istanbul, Turkey, during July 2025.
The 96 revised full papers presented in these proceedings were carefully reviewed and selected from 224 submissions. The papers are organized in the following topical sections:
Volume I:
Concept-based explainable AI; human-centered explainability; explainability, privacy, and fairness in trustworthy AI; and XAI in healthcare.
Volume II:
Rule-based XAI systems and actionable explainable AI; feature importance-based XAI; novel post-hoc and ante-hoc XAI approaches; and XAI for scientific discovery.
Volume III:
Generative AI meets explainable AI; intrinsically interpretable explainable AI; benchmarking and XAI evaluation measures; and XAI for representational alignment.
Volume IV:
XAI in computer vision; counterfactuals in XAI; explainable sequential decision making; and explainable AI in finance and legal frameworks for XAI technologies.
Volume V:
Applications of XAI; human-centered XAI and argumentation; explainable and interactive hybrid decision making; and uncertainty in explainable AI.
Edited by: Riccardo Guidotti, Ute Schmid, Luca Longo
Imprint: Springer Nature Switzerland AG
Country of Publication: Switzerland
Dimensions: Height: 235mm, Width: 155mm
ISBN: 9783032083265
ISBN 10: 3032083265
Series: Communications in Computer and Information Science
Pages: 448
Publication Date: 12 October 2025
Audience: College/higher education; Professional and scholarly; Further/Higher Education; Undergraduate
Format: Paperback
Publisher's Status: Active
Generative AI meets Explainable AI:
Reasoning-Grounded Natural Language Explanations for Language Models
What's Wrong with Your Synthetic Tabular Data? Using Explainable AI to Evaluate Generative Models
Explainable Optimization: Leveraging Large Language Models for User-Friendly Explanations
Large Language Models as Attribution Regularizers for Efficient Model Training
GraphXAIN: Narratives to Explain Graph Neural Networks
Intrinsically Interpretable Explainable AI:
MSL: Multiclass Scoring Lists for Interpretable Incremental Decision Making
Interpretable World Model Imaginations as Deep Reinforcement Learning Explanation
Unsupervised and Interpretable Detection of User Personalities in Online Social Networks
An Interpretable Data-Driven Approach for Modeling Toxic Users Via Feature Extraction
Assessing and Quantifying Perceived Trust in Interpretable Clinical Decision Support
Benchmarking and XAI Evaluation Measures:
When can you Trust your Explanations? A Robustness Analysis on Feature Importances
XAIEV – a Framework for the Evaluation of XAI-Algorithms for Image Classification
From Input to Insight: Probing the Reasoning of Attention-based MIL Models
Uncovering the Structure of Explanation Quality with Spectral Analysis
Consolidating Explanation Stability Metrics
XAI for Representational Alignment:
Reduction of Ocular Artefacts in EEG Signals Based on Interpretation of Variational Autoencoder Latent Space
Syntax-Guided Metric-Based Class Activation Mapping
Which Direction to Choose? An Analysis on the Representation Power of Self-Supervised ViTs in Downstream Tasks
XpertAI: Uncovering Regression Model Strategies for Sub-manifolds
An XAI-based Analysis of Shortcut Learning in Neural Networks