Attacks, Defenses and Testing for Deep Learning (Record no. 88250)

000 -LEADER
fixed length control field 05701nam a22005655i 4500
001 - CONTROL NUMBER
control field 978-981-97-0425-5
005 - DATE AND TIME OF LATEST TRANSACTION
control field 20240730172355.0
008 - FIXED-LENGTH DATA ELEMENTS--GENERAL INFORMATION
fixed length control field 240603s2024 si | s |||| 0|eng d
020 ## - INTERNATIONAL STANDARD BOOK NUMBER
ISBN 9789819704255
-- 978-981-97-0425-5
082 04 - CLASSIFICATION NUMBER
Call Number 006.3
100 1# - AUTHOR NAME
Author Chen, Jinyin.
245 10 - TITLE STATEMENT
Title Attacks, Defenses and Testing for Deep Learning
250 ## - EDITION STATEMENT
Edition statement 1st ed. 2024.
300 ## - PHYSICAL DESCRIPTION
Number of Pages XX, 399 p. 128 illus., 126 illus. in color.
505 0# - FORMATTED CONTENTS NOTE
Remark 2 Perturbation Optimized Black-Box Adversarial Attacks via Genetic Algorithm -- Feature Transfer Based Stealthy Poisoning Attack for DNNs -- Adversarial Attacks on GNN Based Vertical Federated Learning -- A Novel DNN Object Contour Attack on Image Recognition -- Query-Efficient Adversarial Attack Against Vertical Federated Graph Learning -- Targeted Label Adversarial Attack on Graph Embedding -- Backdoor Attack on Dynamic Link Prediction -- Attention Mechanism based Adversarial Attack against DRL -- Characterizing Adversarial Examples via Local Gradient Checking -- A Novel Adversarial Defense by Refocusing on Critical Areas -- Neuron-level Inverse Perturbation Against Adversarial Attacks -- Adaptive Channel Transformation-based Detector for Adversarial Attacks -- Defense Against Free-rider Attack From the Weight Evolving Frequency -- An Effective Model Copyright Protection for Federated Learning -- Guard the vertical federated graph learning from Property Inference Attack -- Using Adversarial Examples to Against Backdoor Attack in FL -- Evaluating the Adversarial Robustness of Deep Model by Decision Boundaries -- Certifiable Prioritization for Deep Neural Networks via Movement Cost in Feature Space -- Interpretable White-Box Fairness Testing through Biased Neuron Identification -- A Deep Learning Framework for Dynamic Network Link Prediction.
520 ## - SUMMARY, ETC.
Summary, etc This book provides a systematic study of the security of deep learning. With its powerful learning ability, deep learning is widely used in computer vision (CV), federated learning (FL), graph neural networks (GNN), reinforcement learning (RL), and other scenarios. However, researchers have revealed that deep learning is vulnerable to malicious attacks, which can lead to unpredictable consequences. Taking autonomous driving as an example, there were more than 12 serious autonomous-driving accidents worldwide in 2018, involving Uber, Tesla, and other high-tech enterprises. Drawing on the reviewed literature, we need to discover vulnerabilities in deep learning through attacks, reinforce its defenses, and test model performance to ensure its robustness. Attacks can be divided into adversarial attacks and poisoning attacks. Adversarial attacks occur during the model testing phase, where the attacker crafts adversarial examples by adding small perturbations to the input. Poisoning attacks occur during the model training phase, where the attacker injects poisoned examples into the training dataset, embedding a backdoor trigger in the trained deep learning model. An effective defense method is an important guarantee for the application of deep learning. Existing defense methods fall into three types: data modification, model modification, and network add-on. The data modification defense performs adversarial defense by fine-tuning the input data. The model modification defense adjusts the model architecture to defend against attacks. The network add-on method detects adversarial examples by training an adversarial-example detector. Testing deep neural networks is an effective way to measure the security and robustness of deep learning models: through test evaluation, security vulnerabilities and weaknesses in deep neural networks can be identified, and fixing them improves the security and robustness of the model. Our audience includes researchers in the field of deep learning security, as well as software development engineers specializing in deep learning.
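To make the summary's distinction concrete, below is a minimal, illustrative PyTorch sketch (not taken from the book) of the classic one-step FGSM adversarial attack, showing how a small perturbation is added to a test-time input; the model, labels, and epsilon value are assumptions for the example.

```python
# Minimal FGSM sketch: craft an adversarial example at test time by
# adding a small, gradient-aligned perturbation to the input.
# All names (model, x, y, epsilon) are illustrative assumptions.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Return x' = clamp(x + epsilon * sign(grad_x loss), 0, 1)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)   # loss on the clean input
    loss.backward()                       # gradient w.r.t. the input pixels
    x_adv = x + epsilon * x.grad.sign()   # one-step perturbation
    return x_adv.clamp(0.0, 1.0).detach() # keep pixels in the valid range
```

A poisoning attack, by contrast, would alter examples in the training dataset before the model is fit, rather than perturbing inputs at test time.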
700 1# - AUTHOR 2
Author 2 Zhang, Ximin.
700 1# - AUTHOR 2
Author 2 Zheng, Haibin.
856 40 - ELECTRONIC LOCATION AND ACCESS
Uniform Resource Identifier https://doi.org/10.1007/978-981-97-0425-5
942 ## - ADDED ENTRY ELEMENTS (KOHA)
Koha item type eBooks
264 #1 - PRODUCTION, PUBLICATION, DISTRIBUTION, MANUFACTURE, AND COPYRIGHT NOTICE
-- Singapore :
-- Springer Nature Singapore :
-- Imprint: Springer,
-- 2024.
336 ## - CONTENT TYPE
-- text
-- txt
-- rdacontent
337 ## - MEDIA TYPE
-- computer
-- c
-- rdamedia
338 ## - CARRIER TYPE
-- online resource
-- cr
-- rdacarrier
347 ## - DIGITAL FILE CHARACTERISTICS
-- text file
-- PDF
-- rda
650 #0 - SUBJECT ADDED ENTRY--SUBJECT 1
-- Artificial intelligence.
650 #0 - SUBJECT ADDED ENTRY--SUBJECT 1
-- Computer engineering.
650 #0 - SUBJECT ADDED ENTRY--SUBJECT 1
-- Computer networks.
650 #0 - SUBJECT ADDED ENTRY--SUBJECT 1
-- Neural networks (Computer science).
650 14 - SUBJECT ADDED ENTRY--SUBJECT 1
-- Artificial Intelligence.
650 24 - SUBJECT ADDED ENTRY--SUBJECT 1
-- Computer Engineering and Networks.
650 24 - SUBJECT ADDED ENTRY--SUBJECT 1
-- Mathematical Models of Cognitive Processes and Neural Networks.
912 ## -
-- ZDB-2-SCS
912 ## -
-- ZDB-2-SXCS

No items available.