
Multi-processor system-on-chip. 1, Architectures / coordinated by Liliana Andrade, Frédéric Rousseau.

Contributor(s): Andrade, Liliana | Rousseau, Frédéric, 1967-.
Material type: Book
Publisher: London : ISTE, Ltd. ; Hoboken : Wiley, 2021
Description: 1 online resource (321 p.)
ISBN: 9781119818298; 111981829X; 9781119818274; 1119818273
Other title: Architectures
Subject(s): Systems on a chip | Multiprocessors
Genre/Form: Electronic books
Additional physical formats: Print version: Multi-Processor System-On-Chip 1
DDC classification: 006.2/2
Online resources: Wiley Online Library
Contents:
Cover -- Half-Title Page -- Dedication -- Title Page -- Copyright Page -- Contents -- Foreword -- Acknowledgments -- PART 1: Processors -- 1 Processors for the Internet of Things -- 1.1. Introduction -- 1.2. Versatile processors for low-power IoT edge devices -- 1.2.1. Control processing, DSP and machine learning -- 1.2.2. Configurability and extensibility -- 1.3. Machine learning inference -- 1.3.1. Requirements for low/mid-end machine learning inference -- 1.3.2. Processor capabilities for low-power machine learning inference -- 1.3.3. A software library for machine learning inference
1.3.4. Example machine learning applications and benchmarks -- 1.4. Conclusion -- 1.5. References -- 2 A Qualitative Approach to Many-core Architecture -- 2.1. Introduction -- 2.2. Motivations and context -- 2.2.1. Many-core processors -- 2.2.2. Machine learning inference -- 2.2.3. Application requirements -- 2.3. The MPPA3 many-core processor -- 2.3.1. Global architecture -- 2.3.2. Compute cluster -- 2.3.3. VLIW core -- 2.3.4. Coprocessor -- 2.4. The MPPA3 software environments -- 2.4.1. High-performance computing -- 2.4.2. KaNN code generator -- 2.4.3. High-integrity computing
2.5. Conclusion -- 2.6. References -- 3 The Plural Many-core Architecture -- High Performance at Low Power -- 3.1. Introduction -- 3.2. Related works -- 3.3. Plural many-core architecture -- 3.4. Plural programming model -- 3.5. Plural hardware scheduler/synchronizer -- 3.6. Plural networks-on-chip -- 3.6.1. Scheduler NoC -- 3.6.2. Shared memory NoC -- 3.7. Hardware and software accelerators for the Plural architecture -- 3.8. Plural system software -- 3.9. Plural software development tools -- 3.10. Matrix multiplication algorithm on the Plural architecture
4 ASIP-Based Multi-Processor Systems for an Efficient Implementation of CNNs -- 4.1. Introduction -- 4.2. Related works -- 4.3. ASIP architecture -- 4.4. Single-core scaling -- 4.5. MPSoC overview -- 4.6. NoC parameter exploration -- 4.7. Summary and conclusion -- 4.8. References -- PART 2: Memory -- 5 Tackling the MPSoC Data Locality Challenge -- 5.1. Motivation -- 5.2. MPSoC target platform -- 5.3. Related work -- 5.4. Coherence-on-demand: region-based cache coherence -- 5.4.1. RBCC versus global coherence -- 5.4.2. OS extensions for coherence-on-demand -- 5.4.3. Coherency region manager
5.4.4. Experimental evaluations -- 5.4.5. RBCC and data placement -- 5.5. Near-memory acceleration -- 5.5.1. Near-memory synchronization accelerator -- 5.5.2. Near-memory queue management accelerator -- 5.5.3. Near-memory graph copy accelerator -- 5.5.4. Near-cache accelerator -- 5.6. The big picture -- 5.7. Conclusion -- 5.8. Acknowledgments -- 5.9. References -- 6 mMPU: Building a Memristor-based General-purpose In-memory Computation Architecture -- 6.1. Introduction -- 6.2. MAGIC NOR gate -- 6.3. In-memory algorithms for latency reduction -- 6.4. Synthesis and in-memory mapping methods -- 6.4.1. SIMPLE.

Description based upon print version of record.


