000 04137nam a2200541 i 4500
001 7845173
003 IEEE
005 20220712204859.0
006 m o d
007 cr |n|||||||||
008 170316s2016 mau ob 001 eng d
020 _a9780262337656
_qelectronic bk.
020 _z0262337649
_qelectronic bk.
020 _z9780262035125
_qhardcover
035 _a(CaBNVSL)mat07845173
035 _a(IDAMS)0b00006485bb8271
040 _aCaBNVSL
_beng
_erda
_cCaBNVSL
_dCaBNVSL
041 1 _aeng
_hfre
050 4 _aQ180.55.E9
_bG56 2016eb
100 1 _aGingras, Yves,
_d1954-
_eauthor.
_924958
240 1 0 _aDérives de l'évaluation de la recherche.
_lEnglish
245 1 0 _aBibliometrics and research evaluation :
_buses and abuses /
_cYves Gingras.
264 1 _aCambridge, Massachusetts :
_bThe MIT Press,
_c[2016]
264 2 _a[Piscataway, New Jersey] :
_bIEEE Xplore,
_c[2016]
300 _a1 PDF (xii, 119 pages).
336 _atext
_2rdacontent
337 _aelectronic
_2isbdmedia
338 _aonline resource
_2rdacarrier
490 1 _aHistory and foundations of information science
504 _aIncludes bibliographical references and index.
505 0 _aThe origins of bibliometrics -- What bibliometrics teach us about the dynamics of science -- The proliferation of research evaluations -- The evaluation of research evaluation -- Conclusion: the universities' new clothes.
506 _aRestricted to subscribers or individual electronic text purchasers.
520 3 _a"The research evaluation market is booming. "Ranking," "metrics," "h-index," and "impact factors" are reigning buzzwords. Government and research administrators want to evaluate everything -- teachers, professors, training programs, universities -- using quantitative indicators. Among the tools used to measure "research excellence," bibliometrics -- aggregate data on publications and citations -- has become dominant. Bibliometrics is hailed as an "objective" measure of research quality, a quantitative measure more useful than "subjective" and intuitive evaluation methods such as peer review that have been used since scientific papers were first published in the seventeenth century. In this book, Yves Gingras offers a spirited argument against an unquestioning reliance on bibliometrics as an indicator of research quality. Gingras shows that bibliometric rankings have no real scientific validity, rarely measuring what they pretend to. Although the study of publication and citation patterns, at the proper scales, can yield insights on the global dynamics of science over time, ill-defined quantitative indicators often generate perverse and unintended effects on the direction of research. Moreover, abuse of bibliometrics occurs when data is manipulated to boost rankings. Gingras looks at the politics of evaluation and argues that using numbers can be a way to control scientists and diminish their autonomy in the evaluation process. Proposing precise criteria for establishing the validity of indicators at a given scale of analysis, Gingras questions why universities are so eager to let invalid indicators influence their research strategy."
530 _aAlso available in print.
538 _aMode of access: World Wide Web
588 0 _aPrint version record.
650 0 _aBibliometrics.
_924384
650 0 _aResearch
_xEvaluation.
_924959
650 0 _aEducation, Higher
_xResearch
_xEvaluation.
_924960
650 0 _aUniversities and colleges
_xResearch
_xEvaluation.
_924961
655 4 _aElectronic books.
_93294
710 2 _aIEEE Xplore (Online Service),
_edistributor.
_924962
710 2 _aMIT Press,
_epublisher.
_924963
776 0 8 _iPrint version:
_aGingras, Yves, 1954-
_sDérives de l'évaluation de la recherche. English.
_tBibliometrics and research evaluation.
_dCambridge, Massachusetts : The MIT Press, [2016]
_z9780262035125
_w(DLC) 2016014090
_w(OCoLC)946160420
830 0 _aHistory and foundations of information science.
_922370
856 4 2 _3Abstract with links to resource
_uhttps://ieeexplore.ieee.org/xpl/bkabstractplus.jsp?bkn=7845173
942 _cEBK
999 _c73477
_d73477