000 03716nam a2200529 i 4500
001 6267471
003 IEEE
005 20220712204715.0
006 m o d
007 cr |n|||||||||
008 151228s1991 mau ob 001 eng d
010 _z 91027302 (print)
020 _a9780262288484
_qelectronic
020 _z0262082055
_qprint
020 _z9780262082051
_qprint
035 _a(CaBNVSL)mat06267471
035 _a(IDAMS)0b000064818b44ab
040 _aCaBNVSL
_beng
_erda
_cCaBNVSL
_dCaBNVSL
050 4 _aQA76.5
_b.H42 1991eb
082 0 _a005.2
_220
100 1 _aHatcher, Philip J.,
_eauthor.
_922974
245 1 0 _aData-parallel programming on MIMD computers /
_cPhilip J. Hatcher, Michael J. Quinn.
264 1 _aCambridge, Massachusetts :
_bMIT Press,
_c1991.
264 2 _a[Piscataway, New Jersey] :
_bIEEE Xplore,
_c[1991]
300 _a1 PDF (250 pages).
336 _atext
_2rdacontent
337 _aelectronic
_2isbdmedia
338 _aonline resource
_2rdacarrier
490 1 _aScientific and engineering computation
504 _aIncludes bibliographical references and index.
506 1 _aRestricted to subscribers or individual electronic text purchasers.
520 _aMIMD computers are notoriously difficult to program. Data-Parallel Programming demonstrates that architecture-independent parallel programming is possible by describing in detail how programs written in a high-level SIMD programming language may be compiled and efficiently executed on both shared-memory multiprocessors and distributed-memory multicomputers. The authors provide enough data so that the reader can decide the feasibility of architecture-independent programming in a data-parallel language. For each benchmark program they give the source code listing, absolute execution time on both a multiprocessor and a multicomputer, and a speedup relative to a sequential program. And they often present multiple solutions to the same problem, to better illustrate the strengths and weaknesses of these compilers. The language presented is Dataparallel C, a variant of the original C* language developed by Thinking Machines Corporation for its Connection Machine processor array. Separate chapters describe the compilation of Dataparallel C programs for execution on the Sequent multiprocessor and the Intel and nCUBE hypercubes, respectively. The authors document the performance of these compilers on a variety of benchmark programs and present several case studies. Philip J. Hatcher is Assistant Professor in the Department of Computer Science at the University of New Hampshire. Michael J. Quinn is Associate Professor of Computer Science at Oregon State University. Contents: Introduction. Dataparallel C Programming Language Description. Design of a Multicomputer Dataparallel C Compiler. Design of a Multiprocessor Dataparallel C Compiler. Writing Efficient Programs. Benchmarking the Compilers. Case Studies. Conclusions.
530 _aAlso available in print.
538 _aMode of access: World Wide Web.
588 _aDescription based on PDF viewed 12/28/2015.
650 0 _aParallel programming (Computer science)
_96675
650 0 _aMIMD computers
_xProgramming.
_922975
650 0 _aC (Computer program language)
_93828
655 0 _aElectronic books.
_93294
700 1 _aQuinn, Michael J.
_q(Michael Jay)
_922976
710 2 _aIEEE Xplore (Online Service),
_edistributor.
_922977
710 2 _aMIT Press,
_epublisher.
_922978
776 0 8 _iPrint version:
_z9780262082051
830 0 _aScientific and engineering computation
_921687
856 4 2 _3Abstract with links to resource
_uhttps://ieeexplore.ieee.org/xpl/bkabstractplus.jsp?bkn=6267471
942 _cEBK
999 _c73125
_d73125