Catalogue of the German National Library (Deutsche Nationalbibliothek)


Search result for: location=onlinefree



Result 17 of 3669564



Online resources
Link to this record: https://d-nb.info/1378795199
Title: Explainable Artificial Intelligence : Third World Conference, xAI 2025, Istanbul, Turkey, July 9–11, 2025, Proceedings, Part III / edited by Riccardo Guidotti, Ute Schmid, Luca Longo
Person(s): Guidotti, Riccardo (editor)
Schmid, Ute (editor)
Longo, Luca (editor)
Organization(s): SpringerLink (Online service) (other)
Edition: 1st ed. 2026
Publisher: Cham : Springer Nature Switzerland, Imprint: Springer
Date of publication: 2026
Extent/Format: Online resource : XIX, 448 pp., 149 illus. (143 illus. in color)
Other edition(s): Printed edition: ISBN 978-3-032-08326-5
Printed edition: ISBN 978-3-032-08328-9
Contents:
Generative AI meets Explainable AI
Reasoning-Grounded Natural Language Explanations for Language Models
What's Wrong with Your Synthetic Tabular Data? Using Explainable AI to Evaluate Generative Models
Explainable Optimization: Leveraging Large Language Models for User-Friendly Explanations
Large Language Models as Attribution Regularizers for Efficient Model Training
GraphXAIN: Narratives to Explain Graph Neural Networks
Intrinsically Interpretable Explainable AI
MSL: Multiclass Scoring Lists for Interpretable Incremental Decision Making
Interpretable World Model Imaginations as Deep Reinforcement Learning Explanation
Unsupervised and Interpretable Detection of User Personalities in Online Social Networks
An Interpretable Data-Driven Approach for Modeling Toxic Users Via Feature Extraction
Assessing and Quantifying Perceived Trust in Interpretable Clinical Decision Support
Benchmarking and XAI Evaluation Measures
When can you Trust your Explanations? A Robustness Analysis on Feature Importances
XAIEV – a Framework for the Evaluation of XAI-Algorithms for Image Classification
From Input to Insight: Probing the Reasoning of Attention-based MIL Models
Uncovering the Structure of Explanation Quality with Spectral Analysis
Consolidating Explanation Stability Metrics
XAI for Representational Alignment
Reduction of Ocular Artefacts in EEG Signals Based on Interpretation of Variational Autoencoder Latent Space
Syntax-Guided Metric-Based Class Activation Mapping
Which Direction to Choose? An Analysis on the Representation Power of Self-Supervised ViTs in Downstream Tasks
XpertAI: Uncovering Regression Model Strategies for Sub-manifolds
An XAI-based Analysis of Shortcut Learning in Neural Networks
Persistent identifier: URN: urn:nbn:de:101:1-2510130417381.296962447521
DOI: 10.1007/978-3-032-08327-2
URL: https://doi.org/10.1007/978-3-032-08327-2
ISBN: 978-3-032-08327-2
Language(s): English (eng)
Series: Communications in Computer and Information Science ; 2578
Subject group(s): 370 Education

Online access: Open archived object



