000 03787nam a22004575i 4500
001 978-3-031-02276-0
003 DE-He213
005 20240730163853.0
007 cr nn 008mamaa
008 220601s2011 sz | s |||| 0|eng d
020 _a9783031022760
_9978-3-031-02276-0
024 7 _a10.1007/978-3-031-02276-0
_2doi
050 4 _aTK5105.5-5105.9
072 7 _aUKN
_2bicssc
072 7 _aCOM043000
_2bisacsh
072 7 _aUKN
_2thema
082 0 4 _a004.6
_223
100 1 _aHarman, Donna.
_eauthor.
_4aut
_4http://id.loc.gov/vocabulary/relators/aut
_980936
245 1 0 _aInformation Retrieval Evaluation
_h[electronic resource] /
_cby Donna Harman.
250 _a1st ed. 2011.
264 1 _aCham :
_bSpringer International Publishing :
_bImprint: Springer,
_c2011.
300 _aXI, 107 p.
_bonline resource.
336 _atext
_btxt
_2rdacontent
337 _acomputer
_bc
_2rdamedia
338 _aonline resource
_bcr
_2rdacarrier
347 _atext file
_bPDF
_2rda
490 1 _aSynthesis Lectures on Information Concepts, Retrieval, and Services,
_x1947-9468
505 0 _aIntroduction and Early History -- "Batch" Evaluation Since 1992 -- Interactive Evaluation -- Conclusion.
520 _aEvaluation has always played a major role in information retrieval, with early pioneers such as Cyril Cleverdon and Gerard Salton laying the foundations for most of the evaluation methodologies in use today. The retrieval community has been extremely fortunate to have such a well-grounded evaluation paradigm during a period when most of the human language technologies were just developing. This lecture explains where these evaluation methodologies came from and how they have continued to adapt to the vastly changed environment of today's search engine world. The lecture opens with a discussion of the early evaluation of information retrieval systems, starting with the Cranfield testing in the early 1960s, continuing with the Lancaster "user" study for MEDLARS, and presenting the various test collection investigations by the SMART project and by groups in Britain. The emphasis in this chapter is on the how and the why of the various methodologies developed. The second chapter covers the more recent "batch" evaluations, examining the methodologies used in the various open evaluation campaigns such as TREC, NTCIR (emphasis on Asian languages), CLEF (emphasis on European languages), INEX (emphasis on semi-structured data), etc. Here again the focus is on the how and why, and in particular on the evolution of the older evaluation methodologies to handle new information access techniques, including how the test collection techniques were modified and how the metrics were changed to better reflect operational environments. The final chapters look at evaluation issues in user studies -- the interactive part of information retrieval -- including a look at the search log studies done mainly by the commercial search engines. Here the goal is to show, via case studies, how high-level issues of experimental design affect the final evaluations. Table of Contents: Introduction and Early History / "Batch" Evaluation Since 1992 / Interactive Evaluation / Conclusion.
650 0 _aComputer networks.
_931572
650 1 4 _aComputer Communication Networks.
_980937
710 2 _aSpringerLink (Online service)
_980938
773 0 _tSpringer Nature eBook
776 0 8 _iPrinted edition:
_z9783031011481
776 0 8 _iPrinted edition:
_z9783031034046
830 0 _aSynthesis Lectures on Information Concepts, Retrieval, and Services,
_x1947-9468
_980939
856 4 0 _uhttps://doi.org/10.1007/978-3-031-02276-0
912 _aZDB-2-SXSC
942 _cEBK
999 _c85071
_d85071