Catalogue of the German National Library
Result of search for: location=onlinefree
| Field | Value |
| --- | --- |
| Link to this record | https://d-nb.info/1378795199 |
| Title | Explainable Artificial Intelligence : Third World Conference, xAI 2025, Istanbul, Turkey, July 9–11, 2025, Proceedings, Part III / edited by Riccardo Guidotti, Ute Schmid, Luca Longo |
| Person(s) | Guidotti, Riccardo (editor); Schmid, Ute (editor); Longo, Luca (editor) |
| Organization(s) | SpringerLink (Online service) (other) |
| Edition | 1st ed. 2026 |
| Publisher | Cham : Springer Nature Switzerland, Imprint: Springer |
| Date | Publication date: 2026 |
| Extent/Format | Online resource; XIX, 448 p., 149 illus., 143 illus. in color |
| Other edition(s) | Printed edition: ISBN 978-3-032-08326-5; Printed edition: ISBN 978-3-032-08328-9 |
| Contents | Generative AI meets Explainable AI -- Reasoning-Grounded Natural Language Explanations for Language Models -- What's Wrong with Your Synthetic Tabular Data? Using Explainable AI to Evaluate Generative Models -- Explainable Optimization: Leveraging Large Language Models for User-Friendly Explanations -- Large Language Models as Attribution Regularizers for Efficient Model Training -- GraphXAIN: Narratives to Explain Graph Neural Networks -- Intrinsically Interpretable Explainable AI -- MSL: Multiclass Scoring Lists for Interpretable Incremental Decision Making -- Interpretable World Model Imaginations as Deep Reinforcement Learning Explanation -- Unsupervised and Interpretable Detection of User Personalities in Online Social Networks -- An Interpretable Data-Driven Approach for Modeling Toxic Users Via Feature Extraction -- Assessing and Quantifying Perceived Trust in Interpretable Clinical Decision Support -- Benchmarking and XAI Evaluation Measures -- When can you Trust your Explanations? A Robustness Analysis on Feature Importances -- XAIEV – a Framework for the Evaluation of XAI-Algorithms for Image Classification -- From Input to Insight: Probing the Reasoning of Attention-based MIL Models -- Uncovering the Structure of Explanation Quality with Spectral Analysis -- Consolidating Explanation Stability Metrics -- XAI for Representational Alignment -- Reduction of Ocular Artefacts in EEG Signals Based on Interpretation of Variational Autoencoder Latent Space -- Syntax-Guided Metric-Based Class Activation Mapping -- Which Direction to Choose? An Analysis on the Representation Power of Self-Supervised ViTs in Downstream Tasks -- XpertAI: Uncovering Regression Model Strategies for Sub-manifolds -- An XAI-based Analysis of Shortcut Learning in Neural Networks |
| Persistent Identifier | URN: urn:nbn:de:101:1-2510130417381.296962447521 DOI: 10.1007/978-3-032-08327-2 | 
| URL | https://doi.org/10.1007/978-3-032-08327-2 | 
| ISBN/Binding/Price | 978-3-032-08327-2 |
| Language(s) | English (eng) |
| Relations | Communications in Computer and Information Science ; 2578 |
| Subject group(s) | 370 Education |
| Online access | Open archive object |