000 03684nam a22005055i 4500
001 978-3-031-01743-8
003 DE-He213
005 20240730163710.0
007 cr nn 008mamaa
008 220601s2014 sz | s |||| 0|eng d
020 _a9783031017438
_9978-3-031-01743-8
024 7 _a10.1007/978-3-031-01743-8
_2doi
050 4 _aTK7867-7867.5
072 7 _aTJFC
_2bicssc
072 7 _aTEC008010
_2bisacsh
072 7 _aTJFC
_2thema
082 0 4 _a621.3815
_223
100 1 _aFalsafi, Babak.
_eauthor.
_4aut
_4http://id.loc.gov/vocabulary/relators/aut
_980022
245 1 2 _aA Primer on Hardware Prefetching
_h[electronic resource] /
_cby Babak Falsafi, Thomas F. Wenisch.
250 _a1st ed. 2014.
264 1 _aCham :
_bSpringer International Publishing :
_bImprint: Springer,
_c2014.
300 _aXIV, 54 p.
_bonline resource.
336 _atext
_btxt
_2rdacontent
337 _acomputer
_bc
_2rdamedia
338 _aonline resource
_bcr
_2rdacarrier
347 _atext file
_bPDF
_2rda
490 1 _aSynthesis Lectures on Computer Architecture,
_x1935-3243
505 0 _aPreface -- Introduction -- Instruction Prefetching -- Data Prefetching -- Concluding Remarks -- Bibliography -- Author Biographies.
520 _aSince the 1970s, microprocessor-based digital platforms have been riding Moore's law, allowing for doubling of density in the same area roughly every two years. However, whereas microprocessor fabrication has focused on increasing instruction execution rate, memory fabrication technologies have focused primarily on an increase in capacity with negligible increase in speed. This divergent trend in performance between the processors and memory has led to a phenomenon referred to as the "Memory Wall." To overcome the memory wall, designers have resorted to a hierarchy of cache memory levels, which rely on the principle of memory access locality to reduce the observed memory access time and the performance gap between processors and memory. Unfortunately, important workload classes exhibit adverse memory access patterns that baffle the simple policies built into modern cache hierarchies to move instructions and data across cache levels. As such, processors often spend much time idling upon a demand fetch of memory blocks that miss in higher cache levels. Prefetching, which predicts future memory accesses and issues requests for the corresponding memory blocks in advance of explicit accesses, is an effective approach to hide memory access latency. A myriad of prefetching techniques have been proposed, and nearly every modern processor includes some hardware prefetching mechanisms targeting simple and regular memory access patterns. This primer offers an overview of the various classes of hardware prefetchers for instructions and data proposed in the research literature, and presents examples of techniques incorporated into modern microprocessors.
650 0 _aElectronic circuits.
_919581
650 0 _aMicroprocessors.
_980023
650 0 _aComputer architecture.
_93513
650 1 4 _aElectronic Circuits and Systems.
_980024
650 2 4 _aProcessor Architectures.
_980025
700 1 _aWenisch, Thomas F.
_eauthor.
_4aut
_4http://id.loc.gov/vocabulary/relators/aut
_980026
710 2 _aSpringerLink (Online service)
_980027
773 0 _tSpringer Nature eBook
776 0 8 _iPrinted edition:
_z9783031006159
776 0 8 _iPrinted edition:
_z9783031028717
830 0 _aSynthesis Lectures on Computer Architecture,
_x1935-3243
_980028
856 4 0 _uhttps://doi.org/10.1007/978-3-031-01743-8
912 _aZDB-2-SXSC
942 _cEBK
999 _c84889
_d84889