Neural Networks with Model Compression (Record no. 87459)

000 -LEADER
fixed length control field 04153nam a22006015i 4500
001 - CONTROL NUMBER
control field 978-981-99-5068-3
005 - DATE AND TIME OF LATEST TRANSACTION
control field 20240730171227.0
008 - FIXED-LENGTH DATA ELEMENTS--GENERAL INFORMATION
fixed length control field 240205s2024 si | s |||| 0|eng d
020 ## - INTERNATIONAL STANDARD BOOK NUMBER
ISBN 9789819950683
-- 978-981-99-5068-3
082 04 - CLASSIFICATION NUMBER
Call Number 006.31
100 1# - AUTHOR NAME
Author Zhang, Baochang.
245 10 - TITLE STATEMENT
Title Neural Networks with Model Compression
250 ## - EDITION STATEMENT
Edition statement 1st ed. 2024.
300 ## - PHYSICAL DESCRIPTION
Number of Pages IX, 260 p. 101 illus., 67 illus. in color.
490 1# - SERIES STATEMENT
Series statement Computational Intelligence Methods and Applications,
505 0# - FORMATTED CONTENTS NOTE
Remark 2 Chapter 1. Introduction -- Chapter 2. Binary Neural Networks -- Chapter 3. Binary Neural Architecture Search -- Chapter 4. Quantization of Neural Networks -- Chapter 5. Network Pruning -- Chapter 6. Applications.
520 ## - SUMMARY, ETC.
Summary, etc Deep learning has achieved impressive results in image classification, computer vision, and natural language processing. To achieve better performance, deeper and wider networks have been designed, increasing the demand for computational resources. The number of floating-point operations (FLOPs) has grown dramatically with larger networks, and this has become an obstacle to deploying convolutional neural networks (CNNs) on mobile and embedded devices. In this context, our book focuses on CNN compression and acceleration, which are important topics for the research community. We describe numerous methods, including parameter quantization, network pruning, low-rank decomposition, and knowledge distillation. More recently, to reduce the burden of handcrafted architecture design, neural architecture search (NAS) has been used to build neural networks automatically by searching over a vast architecture space. Our book also introduces NAS because of its state-of-the-art performance in various applications, such as image classification and object detection. We further describe extensive applications of compressed deep models in image classification, speech recognition, object detection, and tracking. These topics can help researchers better understand the usefulness and potential of network compression in practical applications. Interested readers should have a basic knowledge of machine learning and deep learning to follow the methods described in this book.
650 #0 - SUBJECT ADDED ENTRY--SUBJECT 1
General subdivision Digital techniques.
700 1# - AUTHOR 2
Author 2 Wang, Tiancheng.
700 1# - AUTHOR 2
Author 2 Xu, Sheng.
700 1# - AUTHOR 2
Author 2 Doermann, David.
856 40 - ELECTRONIC LOCATION AND ACCESS
Uniform Resource Identifier https://doi.org/10.1007/978-981-99-5068-3
942 ## - ADDED ENTRY ELEMENTS (KOHA)
Koha item type eBooks
264 #1 - PRODUCTION, PUBLICATION, DISTRIBUTION, MANUFACTURE, AND COPYRIGHT NOTICE
-- Singapore :
-- Springer Nature Singapore :
-- Imprint: Springer,
-- 2024.
336 ## - CONTENT TYPE
-- text
-- txt
-- rdacontent
337 ## - MEDIA TYPE
-- computer
-- c
-- rdamedia
338 ## - CARRIER TYPE
-- online resource
-- cr
-- rdacarrier
347 ## - DIGITAL FILE CHARACTERISTICS
-- text file
-- PDF
-- rda
650 #0 - SUBJECT ADDED ENTRY--SUBJECT 1
-- Machine learning.
650 #0 - SUBJECT ADDED ENTRY--SUBJECT 1
-- Artificial intelligence.
650 #0 - SUBJECT ADDED ENTRY--SUBJECT 1
-- Image processing.
650 #0 - SUBJECT ADDED ENTRY--SUBJECT 1
-- Computer vision.
650 14 - SUBJECT ADDED ENTRY--SUBJECT 1
-- Machine Learning.
650 24 - SUBJECT ADDED ENTRY--SUBJECT 1
-- Artificial Intelligence.
650 24 - SUBJECT ADDED ENTRY--SUBJECT 1
-- Computer Imaging, Vision, Pattern Recognition and Graphics.
650 24 - SUBJECT ADDED ENTRY--SUBJECT 1
-- Computer Vision.
830 #0 - SERIES ADDED ENTRY--UNIFORM TITLE
-- 2510-1773
912 ## -
-- ZDB-2-SCS
912 ## -
-- ZDB-2-SXCS

No items available.