000 05129nam a22005895i 4500
001 978-3-031-57389-7
003 DE-He213
005 20240730172223.0
007 cr nn 008mamaa
008 240529s2024 sz | s |||| 0|eng d
020 _a9783031573897
_9978-3-031-57389-7
024 7 _a10.1007/978-3-031-57389-7
_2doi
050 4 _aTK5105.5-5105.9
072 7 _aUKN
_2bicssc
072 7 _aCOM043000
_2bisacsh
072 7 _aUKN
_2thema
082 0 4 _a004.6
_223
100 1 _aLi, Shaofeng.
_eauthor.
_4aut
_4http://id.loc.gov/vocabulary/relators/aut
_9102406
245 1 0 _aBackdoor Attacks against Learning-Based Algorithms
_h[electronic resource] /
_cby Shaofeng Li, Haojin Zhu, Wen Wu, Xuemin (Sherman) Shen.
250 _a1st ed. 2024.
264 1 _aCham :
_bSpringer Nature Switzerland :
_bImprint: Springer,
_c2024.
300 _aXI, 153 p. 58 illus., 56 illus. in color.
_bonline resource.
336 _atext
_btxt
_2rdacontent
337 _acomputer
_bc
_2rdamedia
338 _aonline resource
_bcr
_2rdacarrier
347 _atext file
_bPDF
_2rda
490 1 _aWireless Networks,
_x2366-1445
505 0 _aIntroduction -- Literature Review of Backdoor Attacks -- Invisible Backdoor Attacks in Image Classification Based Network Services -- Hidden Backdoor Attacks in NLP Based Network Services -- Backdoor Attacks and Defense in FL -- Summary and Future Directions.
520 _aThis book introduces a new type of data poisoning attack, dubbed the backdoor attack. In a backdoor attack, an attacker trains the model with poisoned data to obtain a model that performs well on normal inputs but behaves wrongly on inputs containing crafted triggers. Backdoor attacks can occur in many scenarios where the training process is not entirely controlled, such as using third-party datasets, training on third-party platforms, or directly calling models provided by third parties. Because of the enormous threat that backdoor attacks pose to model supply chain security, they have received widespread attention from academia and industry. This book focuses on backdoor attacks in three types of deep neural network (DNN) applications: image classification, natural language processing, and federated learning. Based on the observation that DNN models are vulnerable to small perturbations, this book demonstrates that steganography and regularization can be adopted to enhance the invisibility of backdoor triggers. Based on image similarity measurement, this book presents two metrics to quantitatively measure the invisibility of backdoor triggers. The invisible trigger design scheme introduced in this book achieves a balance between the invisibility and the effectiveness of backdoor attacks. In the natural language processing domain, it is difficult to design and insert a general backdoor in a manner imperceptible to humans; any corruption to the textual data (e.g., misspelled words or randomly inserted trigger words/sentences) must retain context-awareness and readability to human inspectors. This book introduces two novel hidden backdoor attacks, targeting three major natural language processing tasks (toxic comment detection, neural machine translation, and question answering), depending on whether the targeted NLP platform accepts raw Unicode characters. The emerging distributed training framework, federated learning, has advantages in preserving users' privacy and has been widely used in electronic medical applications; however, it also faces threats from backdoor attacks. This book presents a novel backdoor detection framework for FL-based e-Health systems. The book provides insights into backdoor attacks against different types of learning-based algorithms, including computer vision, natural language processing, and federated learning. The systematic principles in this book also offer valuable guidance on defending future learning-based algorithms against backdoor attacks.
650 0 _aComputer networks.
_931572
650 0 _aWireless communication systems.
_93474
650 0 _aMobile communication systems.
_94051
650 0 _aMachine learning.
_91831
650 1 4 _aComputer Communication Networks.
_9102410
650 2 4 _aWireless and Mobile Communication.
_9102412
650 2 4 _aMachine Learning.
_91831
700 1 _aZhu, Haojin.
_eauthor.
_4aut
_4http://id.loc.gov/vocabulary/relators/aut
_9102413
700 1 _aWu, Wen.
_eauthor.
_4aut
_4http://id.loc.gov/vocabulary/relators/aut
_9102414
700 1 _aShen, Xuemin (Sherman).
_eauthor.
_4aut
_4http://id.loc.gov/vocabulary/relators/aut
_9102415
710 2 _aSpringerLink (Online service)
_9102417
773 0 _tSpringer Nature eBook
776 0 8 _iPrinted edition:
_z9783031573880
776 0 8 _iPrinted edition:
_z9783031573903
776 0 8 _iPrinted edition:
_z9783031573910
830 0 _aWireless Networks,
_x2366-1445
_9102418
856 4 0 _uhttps://doi.org/10.1007/978-3-031-57389-7
912 _aZDB-2-SCS
912 _aZDB-2-SXCS
942 _cEBK
999 _c88110
_d88110