Parallel Processing, 1980 to 2020 [electronic resource] / by Robert Kuhn, David Padua.

By: Kuhn, Robert [author.].
Contributor(s): Padua, David [author.] | SpringerLink (Online service).
Material type: Book
Series: Synthesis Lectures on Computer Architecture
Publisher: Cham : Springer International Publishing : Imprint: Springer, 2021
Edition: 1st ed. 2021.
Description: XXIII, 166 p. online resource.
Content type: text
Media type: computer
Carrier type: online resource
ISBN: 9783031017681.
Subject(s): Electronic circuits | Microprocessors | Computer architecture | Electronic Circuits and Systems | Processor Architectures
Additional physical formats: Printed edition: No title; Printed edition: No title; Printed edition: No title
DDC classification: 621.3815
Online resources: Click here to access online
Contents:
Foreword by David Kuck -- Preface -- Acknowledgments -- Introduction -- Parallel Hardware -- Programming Notations and Compilers -- Applications -- Parallel Hardware Today and Tomorrow -- Concluding Remarks -- Appendix A: Myths and Misconceptions about Parallelism -- Appendix B: Bibliographic Notes -- Appendix C: Taxonomic Notes -- Appendix D: The 1981 Tutorial -- References -- Authors' Biographies.
In: Springer Nature eBook
Summary: This historical survey of parallel processing from 1980 to 2020 is a follow-up to the authors' 1981 Tutorial on Parallel Processing, which covered the state of the art in hardware, programming languages, and applications. Here, we cover the evolution of the field since 1980 in: parallel computers, ranging from the Cyber 205 to clusters now approaching an exaflop, to multicore microprocessors and Graphics Processing Units (GPUs) in commodity personal devices; parallel programming notations such as OpenMP, MPI message passing, and CUDA streaming notation; and seven parallel applications, such as finite element analysis and computer vision. Some things that looked like they would be major trends in 1981, such as big Single Instruction Multiple Data arrays, disappeared for some time but have recently been revived in deep neural network processors. There are now major trends that did not exist in 1980, such as GPUs, distributed memory machines, and parallel processing in nearly every commodity device. This book is intended for those who already have some knowledge of parallel processing today and want to learn about the history of the three areas. In parallel hardware, every major parallel architecture type from 1980 has scaled up in performance and scaled out into commodity microprocessors and GPUs, so that every personal and embedded device is a parallel processor. There has been a confluence of parallel architecture types into hybrid parallel systems. Much of the impetus for change has been Moore's Law, but as clock speed increases have stopped and feature size decreases have slowed, demand on parallel processing to continue performance gains has increased. In programming notations and compilers, we observe that the roots of today's programming notations existed before 1980, and that, through a great deal of research, the most widely used notations today, although greatly broadened from those roots, remain close to target system architectures, allowing the programmer to exploit the target's parallelism almost explicitly. The parallel versions of applications directly or indirectly impact nearly everyone, computer expert or not, and parallelism has brought about major breakthroughs in numerous application areas. Seven parallel applications are studied in this book.
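The programming-notation point above can be made concrete with a small example. The following is a minimal sketch, not taken from the book, of an OpenMP parallel loop in C; the array size and loop body are purely illustrative. The single pragma shows how such notations let the programmer state the target's parallelism almost explicitly, leaving the compiler and runtime to map iterations onto cores.

/* Illustrative only: compile with an OpenMP-capable compiler, e.g. gcc -fopenmp */
#include <stdio.h>
#include <omp.h>

#define N 1000000   /* hypothetical problem size */

int main(void) {
    static double a[N], b[N], c[N];
    double sum = 0.0;

    /* Iterations are independent, so the pragma divides them among threads;
       the reduction clause combines per-thread partial sums safely. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++) {
        a[i] = 0.5 * i;
        b[i] = 2.0 * i;
        c[i] = a[i] + b[i];   /* element-wise work done in parallel */
        sum += c[i];
    }

    printf("sum = %f using up to %d threads\n", sum, omp_get_max_threads());
    return 0;
}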
