
Parallel Computing Architectures and APIs [electronic resource] : IoT Big Data Stream Processing.

By: Kale, Vivek.
Material type: Book
Publisher: Milton : CRC Press LLC, 2019
Description: 1 online resource (407 p.)
ISBN: 9781351029216; 1351029215; 9781351029223; 1351029223; 9781351029209; 1351029207; 9781351029193; 1351029193
Subject(s): Parallel programming (Computer science) | Computer programming | COMPUTERS / General | COMPUTERS / Computer Graphics / Game Programming & Design | COMPUTERS / Information Technology
DDC classification: 005.275
Online resources: Taylor & Francis | OCLC metadata license agreement
Contents:
Cover; Half Title; Title Page; Copyright Page; Dedication; Contents; Preface; Acknowledgments; Author
1: Uniprocessor Computers; 1.1 Types of Computers; 1.1.1 Microcomputers; 1.1.2 Midrange Computers; 1.1.3 Mainframe Computers; 1.1.4 Supercomputers; 1.2 Computer System; 1.2.1 Hardware; 1.2.2 Software; 1.2.3 Network; 1.3 Hardware and Software Logical Equivalence; 1.4 Ladder of Abstraction; 1.4.1 Modeling-Level Architecture; 1.4.2 Algorithm-Level Architecture; 1.4.3 High-Level Architecture; 1.4.4 Assembly-Level Architecture; 1.4.5 System or Instruction Set Architecture-Level Architecture; 1.4.6 Machine or Microarchitecture-Level Architecture; 1.4.7 Control or Logic-Level Architecture; 1.4.8 Device-Level Architecture; 1.5 Application Programming Interfaces; 1.6 Summary
2: Processor Physics and Moore's Law; 2.1 Speed of Processing and Power Problem; 2.2 Area, Delay, and Power Consumption; 2.2.1 Area Consumption; 2.2.2 Delay Consumption; 2.2.3 Power Consumption; 2.3 Area, Latency, and Power Trade-offs; 2.3.1 Area versus Delay Trade-off; 2.3.2 Delay versus Power Trade-off; 2.3.3 Area versus Delay versus Power Trade-off; 2.4 Moore's Law; 2.4.1 Leveraging Moore's Law; 2.4.1.1 Reconfigurable Computing; 2.5 Performance Wall; 2.5.1 Power Wall; 2.5.2 Memory Wall; 2.5.3 Instruction-Level Parallelism Wall; 2.6 Summary
Section I: Genesis of Parallel Computing
3: Processor Basics; 3.1 Processor; 3.2 Aspects of Processor Performance; 3.2.1 Potential for Speedup; 3.2.2 Scalability; 3.2.3 Speedup versus Communication Overhead; 3.3 Enhancing Uniprocessor Performance; 3.3.1 Improving CPU Performance; 3.3.2 Increasing Processor Clock Frequency; 3.3.3 Parallelizing Arithmetic Logic Unit (ALU) Structure; 3.3.4 Pipelining; 3.3.5 Memory Hierarchy; 3.3.5.1 Cache Memory; 3.3.6 Very Long Instruction Word (VLIW) Processors; 3.3.7 Superscalarity; 3.3.8 Instruction-Level Parallelism; 3.3.9 Multicore Architectures; 3.3.10 Multithreading; 3.4 Summary
4: Networking Basics; 4.1 Network Principles; 4.1.1 Protocol; 4.1.2 Protocol Layers; 4.1.3 Protocol Suite; 4.1.4 Datagram; 4.2 Types of Networks; 4.2.1 Personal Area Networks; 4.2.2 Local Area Networks; 4.2.3 Metropolitan Area Networks; 4.2.4 Wide Area Networks; 4.3 Network Models; 4.3.1 OSI Reference Model; 4.3.2 TCP/IP Reference Model; 4.3.2.1 Link Layer; 4.3.2.2 Internet Layer; 4.3.2.3 Transport Layer; 4.3.2.4 Application Layer; 4.4 Interconnection Networks; 4.4.1 Ethernet; 4.4.2 Switches; 4.5 Summary
5: Distributed Systems Basics; 5.1 Distributed Systems; 5.1.1 Distributed Computing; 5.1.1.1 System Architectural Styles; 5.1.1.2 Software Architectural Styles; 5.1.1.3 Technologies for Distributed Computing; 5.2 Distributed System Benefits; 5.3 Distributed Computation Systems; 5.4 Summary
Section II: Road to Parallel Computing
6: Parallel Systems; 6.1 Flynn's Taxonomy for Parallel Computer Architectures; 6.2 Types of Parallel Computers; 6.2.1 Shared Memory Multiprocessor Systems; 6.2.2 Distributed Memory Multicomputers
Summary: Parallel Computing Architectures and APIs: IoT Big Data Stream Processing commences from the point at which high-performance uniprocessors were becoming increasingly complex, expensive, and power-hungry. A basic trade-off exists between the use of one or a small number of such complex processors, at one extreme, and a moderate to very large number of simpler processors, at the other. The latter approach, when combined with a high-bandwidth interprocessor communication facility, leads to significant simplification of the design process. However, two major roadblocks prevent the widespread adoption of such moderately to massively parallel architectures: the interprocessor communication bottleneck, and the difficulty and high cost of algorithm/software development. One of the most important reasons for studying parallel computing architectures is to learn how to extract the best performance from parallel systems. Specifically, you must understand their architectures so that you are able to exploit them during programming via the standardized APIs. This book would be useful for analysts, designers, and developers of high-throughput computing systems essential for big data stream processing emanating from IoT-driven cyber-physical systems (CPS). This pragmatic book:
- Decomposes uniprocessors in terms of a ladder of abstractions to ascertain (say) performance characteristics at a particular level of abstraction
- Explains the limitations of uniprocessor high performance arising from Moore's law
- Introduces the basics of processors, networks, and distributed systems
- Explains the characteristics of parallel systems, parallel computing models, and parallel algorithms
- Explains the three primary categorical representatives of parallel computing architectures, namely, shared memory, message passing, and stream processing
- Introduces the three primary categorical representatives of parallel programming APIs, namely, OpenMP, MPI, and CUDA
- Provides an overview of the Internet of Things (IoT), wireless sensor networks (WSN), sensor data processing, big data, and stream processing
- Provides an introduction to 5G communications, edge computing, and fog computing
Parallel Computing Architectures and APIs: IoT Big Data Stream Processing discusses stream processing, which enables the gathering, processing, and analysis of high-volume, heterogeneous, continuous Internet of Things (IoT) big data streams to extract insights and actionable results in real time. Application domains requiring data stream management include military, homeland security, sensor networks, financial applications, network management, website performance tracking, real-time credit card fraud detection, etc.
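The summary names OpenMP, MPI, and CUDA as representatives of the parallel programming APIs covered by the book. As a purely illustrative sketch (not taken from the book), the following C fragment shows what the shared-memory category looks like in practice with OpenMP: a single directive parallelizes a loop and reduces per-thread partial sums; the array size and variable names are arbitrary assumptions chosen for the example.

    #include <stdio.h>
    #include <omp.h>

    #define N 1000000  /* arbitrary problem size, for illustration only */

    int main(void) {
        static double a[N];
        double sum = 0.0;

        /* Each thread handles a chunk of the iterations; the reduction
           clause combines the per-thread partial sums into one total. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++) {
            a[i] = (double)i;
            sum += a[i];
        }

        printf("sum = %f, max threads = %d\n", sum, omp_get_max_threads());
        return 0;
    }

Compiled with an OpenMP-enabled compiler (e.g., gcc -fopenmp), the loop runs across all available cores; MPI and CUDA address the message-passing and stream-processing categories in an analogous, API-specific way.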
No physical items for this record

Description based upon print version of record.


OCLC-licensed vendor bibliographic record.
