000 05541nam a22006615i 4500
001 978-3-030-28954-6
003 DE-He213
005 20240730175355.0
007 cr nn 008mamaa
008 190829s2019 sz | s |||| 0|eng d
020 _a9783030289546
_9978-3-030-28954-6
024 7 _a10.1007/978-3-030-28954-6
_2doi
050 4 _aQ334-342
050 4 _aTA347.A78
072 7 _aUYQ
_2bicssc
072 7 _aCOM004000
_2bisacsh
072 7 _aUYQ
_2thema
082 0 4 _a006.3
_223
245 1 0 _aExplainable AI: Interpreting, Explaining and Visualizing Deep Learning
_h[electronic resource] /
_cedited by Wojciech Samek, Grégoire Montavon, Andrea Vedaldi, Lars Kai Hansen, Klaus-Robert Müller.
250 _a1st ed. 2019.
264 1 _aCham :
_bSpringer International Publishing :
_bImprint: Springer,
_c2019.
300 _aXI, 439 p. 152 illus., 119 illus. in color.
_bonline resource.
336 _atext
_btxt
_2rdacontent
337 _acomputer
_bc
_2rdamedia
338 _aonline resource
_bcr
_2rdacarrier
347 _atext file
_bPDF
_2rda
490 1 _aLecture Notes in Artificial Intelligence,
_x2945-9141 ;
_v11700
505 0 _aTowards Explainable Artificial Intelligence -- Transparency: Motivations and Challenges -- Interpretability in Intelligent Systems: A New Concept? -- Understanding Neural Networks via Feature Visualization: A Survey -- Interpretable Text-to-Image Synthesis with Hierarchical Semantic Layout Generation -- Unsupervised Discrete Representation Learning -- Towards Reverse-Engineering Black-Box Neural Networks -- Explanations for Attributing Deep Neural Network Predictions -- Gradient-Based Attribution Methods -- Layer-Wise Relevance Propagation: An Overview -- Explaining and Interpreting LSTMs -- Comparing the Interpretability of Deep Networks via Network Dissection -- Gradient-Based vs. Propagation-Based Explanations: An Axiomatic Comparison -- The (Un)reliability of Saliency Methods -- Visual Scene Understanding for Autonomous Driving Using Semantic Segmentation -- Understanding Patch-Based Learning of Video Data by Explaining Predictions -- Quantum-Chemical Insights from Interpretable Atomistic Neural Networks -- Interpretable Deep Learning in Drug Discovery -- Neural Hydrology: Interpreting LSTMs in Hydrology -- Feature Fallacy: Complications with Interpreting Linear Decoding Weights in fMRI -- Current Advances in Neural Decoding -- Software and Application Patterns for Explanation Methods.
520 _aThe development of "intelligent" systems that can take decisions and perform autonomously might lead to faster and more consistent decisions. A limiting factor for broader adoption of AI technology is the inherent risk that comes with giving up human control and oversight to "intelligent" machines. For sensitive tasks involving critical infrastructures and affecting human well-being or health, it is crucial to limit the possibility of improper, non-robust and unsafe decisions and actions. Before deploying an AI system, we see a strong need to validate its behavior, and thus establish guarantees that it will continue to perform as expected when deployed in a real-world environment. In pursuit of that objective, ways for humans to verify the agreement between the AI decision structure and their own ground-truth knowledge have been explored. Explainable AI (XAI) has developed as a subfield of AI, focused on exposing complex AI models to humans in a systematic and interpretable manner. The 22 chapters included in this book provide a timely snapshot of algorithms, theory, and applications of interpretable and explainable AI and AI techniques that have been proposed recently, reflecting the current discourse in this field and providing directions for future development. The book is organized in six parts: towards AI transparency; methods for interpreting AI systems; explaining the decisions of AI systems; evaluating interpretability and explanations; applications of explainable AI; and software for explainable AI.
650 0 _aArtificial intelligence.
_93407
650 0 _aComputer vision.
_9116546
650 0 _aComputers.
_98172
650 0 _aData protection.
_97245
650 0 _aComputer engineering.
_910164
_aComputer networks.
_931572
650 1 4 _aArtificial Intelligence.
_93407
650 2 4 _aComputer Vision.
_9116547
650 2 4 _aComputing Milieux.
_955441
650 2 4 _aData and Information Security.
_931990
650 2 4 _aComputer Engineering and Networks.
_9116548
700 1 _aSamek, Wojciech.
_eeditor.
_4edt
_4http://id.loc.gov/vocabulary/relators/edt
_9116549
700 1 _aMontavon, Grégoire.
_eeditor.
_4edt
_4http://id.loc.gov/vocabulary/relators/edt
_9116550
700 1 _aVedaldi, Andrea.
_eeditor.
_4edt
_4http://id.loc.gov/vocabulary/relators/edt
_9116551
700 1 _aHansen, Lars Kai.
_eeditor.
_4edt
_4http://id.loc.gov/vocabulary/relators/edt
_9116552
700 1 _aMüller, Klaus-Robert.
_eeditor.
_4edt
_4http://id.loc.gov/vocabulary/relators/edt
_9116553
710 2 _aSpringerLink (Online service)
_9116554
773 0 _tSpringer Nature eBook
776 0 8 _iPrinted edition:
_z9783030289539
776 0 8 _iPrinted edition:
_z9783030289553
830 0 _aLecture Notes in Artificial Intelligence,
_x2945-9141 ;
_v11700
_9116555
856 4 0 _uhttps://doi.org/10.1007/978-3-030-28954-6
912 _aZDB-2-SCS
912 _aZDB-2-SXCS
912 _aZDB-2-LNC
942 _cELN
999 _c89906
_d89906