000 03756nam a22005415i 4500
001 978-3-319-99223-5
003 DE-He213
005 20220801214933.0
007 cr nn 008mamaa
008 181023s2019 sz | s |||| 0|eng d
020 _a9783319992235
_9978-3-319-99223-5
024 7 _a10.1007/978-3-319-99223-5
_2doi
050 4 _aTK7867-7867.5
072 7 _aTJFC
_2bicssc
072 7 _aTEC008010
_2bisacsh
072 7 _aTJFC
_2thema
082 0 4 _a621.3815
_223
100 1 _aMoons, Bert.
_eauthor.
_4aut
_4http://id.loc.gov/vocabulary/relators/aut
_941367
245 1 0 _aEmbedded Deep Learning
_h[electronic resource] :
_bAlgorithms, Architectures and Circuits for Always-on Neural Network Processing /
_cby Bert Moons, Daniel Bankman, Marian Verhelst.
250 _a1st ed. 2019.
264 1 _aCham :
_bSpringer International Publishing :
_bImprint: Springer,
_c2019.
300 _aXVI, 206 p. 124 illus., 92 illus. in color.
_bonline resource.
336 _atext
_btxt
_2rdacontent
337 _acomputer
_bc
_2rdamedia
338 _aonline resource
_bcr
_2rdacarrier
347 _atext file
_bPDF
_2rda
505 0 _aChapter 1 Embedded Deep Neural Networks -- Chapter 2 Optimized Hierarchical Cascaded Processing -- Chapter 3 Hardware-Algorithm Co-optimizations -- Chapter 4 Circuit Techniques for Approximate Computing -- Chapter 5 ENVISION: Energy-Scalable Sparse Convolutional Neural Network Processing -- Chapter 6 BINAREYE: Digital and Mixed-signal Always-on Binary Neural Network Processing -- Chapter 7 Conclusions, Contributions and Future Work.
520 _aThis book covers algorithmic and hardware implementation techniques to enable embedded deep learning. The authors describe synergistic design approaches at the application, algorithm, computer-architecture, and circuit levels that help reduce the computational cost of deep learning algorithms. The impact of these techniques is demonstrated in four silicon prototypes for embedded deep learning. Gives a broad overview of effective solutions for energy-efficient neural networks on battery-constrained wearable devices; Discusses the optimization of neural networks for embedded deployment at all levels of the design hierarchy (applications, algorithms, hardware architectures, and circuits), supported by real silicon prototypes; Elaborates on how to design efficient Convolutional Neural Network processors that exploit parallelism, data reuse, sparse operations, and low-precision computation; Supports the introduced theory and design concepts with four real silicon prototypes. The implementation and measured performance of each physical realization are discussed in detail to illustrate and highlight the introduced cross-layer design concepts.
650 0 _aElectronic circuits.
_919581
650 0 _aSignal processing.
_94052
650 0 _aElectronics.
_93425
650 1 4 _aElectronic Circuits and Systems.
_941368
650 2 4 _aSignal, Speech and Image Processing.
_931566
650 2 4 _aElectronics and Microelectronics, Instrumentation.
_932249
700 1 _aBankman, Daniel.
_eauthor.
_4aut
_4http://id.loc.gov/vocabulary/relators/aut
_941369
700 1 _aVerhelst, Marian.
_eauthor.
_4aut
_4http://id.loc.gov/vocabulary/relators/aut
_941370
710 2 _aSpringerLink (Online service)
_941371
773 0 _tSpringer Nature eBook
776 0 8 _iPrinted edition:
_z9783319992228
776 0 8 _iPrinted edition:
_z9783319992242
776 0 8 _iPrinted edition:
_z9783030075774
856 4 0 _uhttps://doi.org/10.1007/978-3-319-99223-5
912 _aZDB-2-ENG
912 _aZDB-2-SXE
942 _cEBK
999 _c76923
_d76923