
Social Explainable AI

Communications of NII Shonan Meetings

Katharina Rohlfing, Brian Lim, Kirsten Thommes, Kary Främling


English
Springer Nature Switzerland AG
19 March 2026
This open access book introduces social aspects relevant to research and development of explainable AI (XAI). The new surge of XAI responds to the societal challenge that many algorithmic approaches (such as machine learning or autonomous intelligent systems) are rapidly increasing in complexity, making justified use of their recommendations difficult for users. A large body of approaches now exists with many ideas of how algorithms should be explainable or even be able to explain their output. However, few of them consider the users' perspective, and even fewer address the social aspects of using XAI. To fill the gap, the book offers a conceptualization of explainability as a social practice, a framework for contextual factors, and an operationalization of users' involvement in creating relevant explanations.

To this end, scholars across disciplines gathered at the Shonan meeting to account for how explanation generation can be tailored to diverse users and their heterogeneous goals when interacting with XAI. Social interaction is therefore key to involving users. Accordingly, we define sXAI (social eXplainable AI) as systems that interact with users in such a way that explaining can be incrementally adapted to them, along with the unfolding context of interaction, to yield a relevant explanation at the interface between both active partners, human and AI. To encourage novel interdisciplinary research, we propose to account for the following dimensions:

•    Patternedness: XAI should account for different contexts that yield different social roles impacting the construction of explanations.
•    Incrementality: XAI should build on the contributions of the involved partners, who adapt to each other.
•    Multimodality: XAI needs to use different communication modalities (e.g., visual, verbal, and auditory).

This book also addresses how to evaluate social XAI systems and what ethical aspects must be considered when employing sXAI. Together, the chapters advance the building of a community interested in sXAI. To increase readability across disciplines, each chapter offers rapid access to its content.
Edited by:   Katharina Rohlfing, Brian Lim, Kirsten Thommes, Kary Främling
Imprint:   Springer Nature Switzerland AG
Country of Publication:   Switzerland
Dimensions:   Height: 235mm, Width: 155mm
ISBN:   9789819652891
ISBN 10:   9819652898
Pages:   615
Publication Date:   19 March 2026
Audience:   Professional and scholarly, College/higher education, Undergraduate, Further/Higher Education
Format:   Hardback
Publisher's Status:   Active

