Catalogue of the German National Library
Link to this record | https://d-nb.info/1378794605
Title | Explainable Artificial Intelligence : Third World Conference, xAI 2025, Istanbul, Turkey, July 9–11, 2025, Proceedings, Part I / edited by Riccardo Guidotti, Ute Schmid, Luca Longo
Person(s) | Guidotti, Riccardo (editor); Schmid, Ute (editor); Longo, Luca (editor)
Organization(s) | SpringerLink (Online service) (other)
Edition | 1st ed. 2026
Publisher | Cham : Springer Nature Switzerland; Imprint: Springer
Date | Publication date: 2026
Extent/Format | Online resource; XIX, 450 p., 149 illus., 132 illus. in color
Other edition(s) | Printed edition: ISBN 978-3-032-08316-6; Printed edition: ISBN 978-3-032-08318-0
Contents |
- Concept-based Explainable AI
- Global Properties from Local Explanations with Concept Explanation Clusters
- From Colors to Classes: Emergence of Concepts in Vision Transformers
- V-CEM: Bridging Performance and Intervenability in Concept-based Models
- Post-Hoc Concept Disentanglement: From Correlated to Isolated Concept Representations
- Concept Extraction for Time Series with ECLAD-ts
- Human-Centered Explainability
- A Nexus of Explainability and Anthropomorphism in AI-Chatbots
- Comparative Explanations: Explanation Guided Decision Making for Human-in-the-Loop Preference Selection
- Generating Rationales Based on Human Explanations for Constrained Optimization
- Algorithmic Knowability: A Unified Approach to Explanations in the AI Act
- Predicting Satisfaction of Counterfactual Explanations from Human Ratings of Explanatory Qualities
- Explainability, Privacy, and Fairness in Trustworthy AI
- Too Sure for Trust: The Paradoxical Effect of Calibrated Confidence in Case of Uncalibrated Trust in Hybrid Decision Making
- The Impact of Concept Explanations and Interventions on Human-Machine Collaboration
- Leaking LoRA: An Evaluation of Password Leaks and Knowledge Storage in Large Language Models
- Exploring Explainability in Federated Learning: A Comparative Study on Brain Age Prediction
- The Dynamics of Trust in XAI: Assessing Perceived and Demonstrated Trust Across Interaction Modes and Risk Treatments
- XAI in Healthcare
- Systematic Benchmarking of Local and Global Explainable AI Methods for Tabular Healthcare Data
- A Combination of Integrated Gradients and SRFAMap for Explaining Neural Networks Trained with High-order Statistical Radiomic Features
- FAIR-MED: Bias Detection and Fairness Evaluation in Healthcare Focused XAI
- Weakly Supervised Pixel-Level Annotation with Visual Interpretability
- Assessing the Value of Explainable Artificial Intelligence for Magnetic Resonance Imaging
Persistent identifier | URN: urn:nbn:de:101:1-2510130405557.523409268953; DOI: 10.1007/978-3-032-08317-3
URL | https://doi.org/10.1007/978-3-032-08317-3
ISBN/binding/price | 978-3-032-08317-3
Language(s) | English (eng)
Relations | Communications in Computer and Information Science ; 2576
Subject group(s) | 004 Computer science
Online access | Open archived object
