000 04290nam a22005295i 4500
001 978-3-031-02005-6
003 DE-He213
005 20240730163446.0
007 cr nn 008mamaa
008 220601s2011 sz | s |||| 0|eng d
020 _a9783031020056
_9978-3-031-02005-6
024 7 _a10.1007/978-3-031-02005-6
_2doi
050 4 _aQA75.5-76.95
072 7 _aUY
_2bicssc
072 7 _aCOM000000
_2bisacsh
072 7 _aUY
_2thema
082 0 4 _a004
_223
100 1 _aGeorgiou, Chryssis.
_eauthor.
_4aut
_4http://id.loc.gov/vocabulary/relators/aut
_978654
245 1 0 _aCooperative Task-Oriented Computing
_h[electronic resource] :
_bAlgorithms and Complexity /
_cby Chryssis Georgiou, Alexander Shvartsman.
250 _a1st ed. 2011.
264 1 _aCham :
_bSpringer International Publishing :
_bImprint: Springer,
_c2011.
300 _aX, 155 p.
_bonline resource.
336 _atext
_btxt
_2rdacontent
337 _acomputer
_bc
_2rdamedia
338 _aonline resource
_bcr
_2rdacarrier
347 _atext file
_bPDF
_2rda
490 1 _aSynthesis Lectures on Distributed Computing Theory,
_x2155-1634
505 0 _aIntroduction -- Distributed Cooperation and Adversity -- Paradigms and Techniques -- Shared-Memory Algorithms -- Message-Passing Algorithms -- The Do-All Problem in Other Settings -- Bibliography -- Authors' Biographies.
520 _aCooperative network supercomputing is becoming increasingly popular for harnessing the power of the global Internet computing platform. A typical Internet supercomputer consists of a master computer or server and a large number of computers called workers, performing computation on behalf of the master. Despite the simplicity and benefits of a single-master approach, as the scale of such computing environments grows, it becomes unrealistic to assume the existence of an infallible master able to coordinate the activities of multitudes of workers. Large-scale distributed systems are inherently dynamic and are subject to perturbations, such as failures of computers and network links; thus it is also necessary to consider fully distributed peer-to-peer solutions. We present a study of cooperative computing with a focus on modeling distributed computing settings, algorithmic techniques enabling one to combine efficiency and fault-tolerance in distributed systems, and the exposition of trade-offs between efficiency and fault-tolerance for robust cooperative computing. The focus of the exposition is on the abstract problem, called Do-All, formulated in terms of a system of cooperating processors that together need to perform a collection of tasks in the presence of adversity. Our presentation deals with models, algorithmic techniques, and analysis. Our goal is to present the most interesting approaches to algorithm design and analysis leading to many fundamental results in cooperative distributed computing. The algorithms selected for inclusion are among the most efficient and additionally serve as good pedagogical examples. Each chapter concludes with exercises and bibliographic notes that include a wealth of references to related work and relevant advanced results.
650 0 _aComputer science.
_99832
650 0 _aCoding theory.
_94154
650 0 _aInformation theory.
_914256
650 0 _aData structures (Computer science).
_98188
650 1 4 _aComputer Science.
_99832
650 2 4 _aCoding and Information Theory.
_978655
650 2 4 _aData Structures and Information Theory.
_931923
700 1 _aShvartsman, Alexander.
_eauthor.
_4aut
_4http://id.loc.gov/vocabulary/relators/aut
_978656
710 2 _aSpringerLink (Online service)
_978657
773 0 _tSpringer Nature eBook
776 0 8 _iPrinted edition:
_z9783031008771
776 0 8 _iPrinted edition:
_z9783031031335
830 0 _aSynthesis Lectures on Distributed Computing Theory,
_x2155-1634
_978658
856 4 0 _uhttps://doi.org/10.1007/978-3-031-02005-6
912 _aZDB-2-SXSC
942 _cEBK
999 _c84628
_d84628