000 04049nam a2200493 i 4500
001 7347044
003 IEEE
005 20220712204848.0
006 m o d
007 cr |n|||||||||
008 151223s2015 mau ob 001 eng d
010 _z 2015039693 (print)
020 _a9780262332248
_qelectronic
020 _z9780262528818
_qpaperback : alk. paper
035 _a(CaBNVSL)mat07347044
035 _a(IDAMS)0b00006484b080cd
040 _aCaBNVSL
_beng
_erda
_cCaBNVSL
_dCaBNVSL
050 4 _aQA76.58
_b.P78 2015eb
082 0 0 _a004/.35
_223
245 0 0 _aProgramming models for parallel computing /
_cBalaji, Pavan, ed.
264 1 _aCambridge, Massachusetts :
_bMIT Press,
_c[2015]
264 2 _a[Piscataway, New Jersey] :
_bIEEE Xplore,
_c[2015]
300 _a1 PDF (488 pages).
336 _atext
_2rdacontent
337 _aelectronic
_2isbdmedia
338 _aonline resource
_2rdacarrier
490 1 _aScientific and engineering computation
504 _aIncludes bibliographical references.
506 1 _aRestricted to subscribers or individual electronic text purchasers.
520 _aWith the coming of the parallel computing era, computer scientists have turned their attention to designing programming models that are suited for high-performance parallel computing and supercomputing systems. Programming parallel systems is complicated by the fact that multiple processing units are simultaneously computing and moving data. This book offers an overview of some of the most prominent parallel programming models used in high-performance computing and supercomputing systems today. The chapters describe the programming models in a unique tutorial style rather than using the formal approach taken in the research literature. The aim is to cover a wide range of parallel programming models, enabling the reader to understand what each has to offer. The book begins with a description of the Message Passing Interface (MPI), the most common parallel programming model for distributed memory computing. It goes on to cover one-sided communication models, ranging from low-level runtime libraries (GASNet, OpenSHMEM) to high-level programming models (UPC, GA, Chapel); task-oriented programming models (Charm++, ADLB, Scioto, Swift, CnC) that allow users to describe their computation and data units as tasks so that the runtime system can manage computation and data movement as necessary; and parallel programming models intended for on-node parallelism in the context of multicore architecture or attached accelerators (OpenMP, Cilk Plus, TBB, CUDA, OpenCL). The book will be a valuable resource for graduate students, researchers, and any scientist who works with data sets and large computations. Contributors: Timothy Armstrong, Michael G. Burke, Ralph Butler, Bradford L. Chamberlain, Sunita Chandrasekaran, Barbara Chapman, Jeff Daily, James Dinan, Deepak Eachempati, Ian T. Foster, William D. Gropp, Paul Hargrove, Wen-mei Hwu, Nikhil Jain, Laxmikant Kale, David Kirk, Kath Knobe, Sriram Krishnamoorthy, Jeffery A. Kuehn, Alexey Kukanov, Charles E. 
Leiserson, Jonathan Lifflander, Ewing Lusk, Tim Mattson, Bruce Palmer, Steven C. Pieper, Stephen W. Poole, Arch D. Robison, Frank Schlimbach, Rajeev Thakur, Abhinav Vishnu, Justin M. Wozniak, Michael Wilde, Kathy Yelick, Yili Zheng.
530 _aAlso available in print.
538 _aMode of access: World Wide Web
588 _aDescription based on PDF viewed 12/23/2015.
650 0 _aParallel processing (Electronic computers)
_924751
650 0 _aParallel programs (Computer programs)
_924752
655 0 _aElectronic books.
_93294
700 1 _aBalaji, Pavan,
_d1980-
_eeditor.
_924753
710 2 _aIEEE Xplore (Online Service),
_edistributor.
_924754
710 2 _aMIT Press,
_epublisher.
_924755
776 0 8 _iPrint version
_z9780262528818
830 0 _aScientific and engineering computation
_921687
856 4 2 _3Abstract with links to resource
_uhttps://ieeexplore.ieee.org/xpl/bkabstractplus.jsp?bkn=7347044
942 _cEBK
999 _c73445
_d73445