
Explainable Artificial Intelligence [electronic resource] : Second World Conference, xAI 2024, Valletta, Malta, July 17-19, 2024, Proceedings, Part III / edited by Luca Longo, Sebastian Lapuschkin, Christin Seifert.

Contributor(s): Longo, Luca [editor] | Lapuschkin, Sebastian [editor] | Seifert, Christin [editor] | SpringerLink (Online service).
Material type: Book
Series: Communications in Computer and Information Science; 2155
Publisher: Cham : Springer Nature Switzerland : Imprint: Springer, 2024
Edition: 1st ed. 2024
Description: XVII, 456 p., 130 illus. (103 illus. in color); online resource
Content type: text
Media type: computer
Carrier type: online resource
ISBN: 9783031638008
Subject(s): Artificial intelligence | Natural language processing (Computer science) | Application software | Computer networks | Artificial Intelligence | Natural Language Processing (NLP) | Computer and Information Systems Applications | Computer Communication Networks
Additional physical formats: Printed edition; Printed edition
DDC classification: 006.3
Online resources: Click here to access online
Contents:
Counterfactual explanations and causality for eXplainable AI:
-- Sub-SpaCE: Subsequence-based Sparse Counterfactual Explanations for Time Series Classification Problems.
-- Human-in-the-loop Personalized Counterfactual Recourse.
-- COIN: Counterfactual inpainting for weakly supervised semantic segmentation for medical images.
-- Enhancing Counterfactual Explanation Search with Diffusion Distance and Directional Coherence.
-- CountARFactuals – Generating plausible model-agnostic counterfactual explanations with adversarial random forests.
-- Causality-Aware Local Interpretable Model-Agnostic Explanations.
-- Evaluating the Faithfulness of Causality in Saliency-based Explanations of Deep Learning Models for Temporal Colour Constancy.
-- CAGE: Causality-Aware Shapley Value for Global Explanations.
Fairness, trust, privacy, security, accountability and actionability in eXplainable AI:
-- Exploring the Reliability of SHAP Values in Reinforcement Learning.
-- Categorical Foundation of Explainable AI: A Unifying Theory.
-- Investigating Calibrated Classification Scores through the Lens of Interpretability.
-- XentricAI: A Gesture Sensing Calibration Approach through Explainable and User-Centric AI.
-- Toward Understanding the Disagreement Problem in Neural Network Feature Attribution.
-- ConformaSight: Conformal Prediction-Based Global and Model-Agnostic Explainability Framework.
-- Differential Privacy for Anomaly Detection: Analyzing the Trade-off Between Privacy and Explainability.
-- Blockchain for Ethical & Transparent Generative AI Utilization by Banking & Finance Lawyers.
-- Multi-modal Machine Learning Model for Interpretable Mobile Malware Classification.
-- Explainable Fraud Detection with Deep Symbolic Classification.
-- Better Luck Next Time: About Robust Recourse in Binary Allocation Problems.
-- Towards Non-Adversarial Algorithmic Recourse.
-- Communicating Uncertainty in Machine Learning Explanations: A Visualization Analytics Approach for Predictive Process Monitoring.
-- XAI for Time Series Classification: Evaluating the Benefits of Model Inspection for End-Users.
In: Springer Nature eBook
Summary: This four-volume set constitutes the refereed proceedings of the Second World Conference on Explainable Artificial Intelligence, xAI 2024, held in Valletta, Malta, during July 17-19, 2024. The 95 full papers presented were carefully reviewed and selected from 204 submissions. The conference papers are organized in topical sections on: Part I - intrinsically interpretable XAI and concept-based global explainability; generative explainable AI and verifiability; notion, metrics, evaluation and benchmarking for XAI. Part II - XAI for graphs and computer vision; logic, reasoning, and rule-based explainable AI; model-agnostic and statistical methods for eXplainable AI. Part III - counterfactual explanations and causality for eXplainable AI; fairness, trust, privacy, security, accountability and actionability in eXplainable AI. Part IV - explainable AI in healthcare and computational neuroscience; explainable AI for improved human-computer interaction and software engineering for explainability; applications of explainable artificial intelligence.
