Bibliometrics and research evaluation : uses and abuses / Yves Gingras.

By: Gingras, Yves, 1954- [author.].
Contributor(s): IEEE Xplore (Online Service) [distributor.] | MIT Press [publisher.].
Material type: Book
Series: History and foundations of information science
Publisher: Cambridge, Massachusetts : The MIT Press, [2016]
Distributor: [Piscataway, New Jersey] : IEEE Xplore, [2016]
Description: 1 PDF (xii, 119 pages)
Content type: text
Media type: electronic
Carrier type: online resource
ISBN: 9780262337656
Uniform title: Dérives de l'évaluation de la recherche. English
Subject(s): Bibliometrics | Research -- Evaluation | Education, Higher -- Research -- Evaluation | Universities and colleges -- Research -- Evaluation
Genre/Form: Electronic books
Additional physical formats: Print version: Bibliometrics and research evaluation
Online resources: Abstract with links to resource | Also available in print
Contents:
The origins of bibliometrics -- What bibliometrics teach us about the dynamics of science -- The proliferation of research evaluations -- The evaluation of research evaluation -- Conclusion: the universities' new clothes.
Abstract: "The research evaluation market is booming. "Ranking," "metrics," "h-index," and "impact factors" are reigning buzzwords. Government and research administrators want to evaluate everything -- teachers, professors, training programs, universities -- using quantitative indicators. Among the tools used to measure "research excellence," bibliometrics -- aggregate data on publications and citations -- has become dominant. Bibliometrics is hailed as an "objective" measure of research quality, a quantitative measure more useful than "subjective" and intuitive evaluation methods such as peer review that have been used since scientific papers were first published in the seventeenth century. In this book, Yves Gingras offers a spirited argument against an unquestioning reliance on bibliometrics as an indicator of research quality. Gingras shows that bibliometric rankings have no real scientific validity, rarely measuring what they pretend to. Although the study of publication and citation patterns, at the proper scales, can yield insights on the global dynamics of science over time, ill-defined quantitative indicators often generate perverse and unintended effects on the direction of research. Moreover, abuse of bibliometrics occurs when data is manipulated to boost rankings. Gingras looks at the politics of evaluation and argues that using numbers can be a way to control scientists and diminish their autonomy in the evaluation process. Proposing precise criteria for establishing the validity of indicators at a given scale of analysis, Gingras questions why universities are so eager to let invalid indicators influence their research strategy."
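
The indicators named in the abstract have standard definitions. As a minimal illustrative sketch (not drawn from the book; the function names and example figures below are hypothetical), the h-index and the two-year journal impact factor can be computed as follows:

    # Illustrative sketch of the two indicators named in the abstract.

    def h_index(citation_counts):
        """Largest h such that the author has h papers with >= h citations each."""
        counts = sorted(citation_counts, reverse=True)
        h = 0
        for rank, cites in enumerate(counts, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    def impact_factor(cites_to_prev_two_years, items_prev_two_years):
        """Two-year journal impact factor for year Y: citations received in Y
        to items published in Y-1 and Y-2, divided by the number of citable
        items published in Y-1 and Y-2."""
        return cites_to_prev_two_years / items_prev_two_years

    # Example: five papers cited 10, 8, 5, 4, and 3 times give an h-index of 4
    # (four papers with at least 4 citations, but not five with at least 5).
    print(h_index([10, 8, 5, 4, 3]))   # 4
    print(impact_factor(200, 80))      # 2.5 (hypothetical figures)

This is exactly the kind of aggregate indicator whose validity at a given scale of analysis the book scrutinizes.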

Includes bibliographical references and index.

Restricted to subscribers or individual electronic text purchasers.

"The research evaluation market is booming. "Ranking," "metrics," "h-index," and "impact factors" are reigning buzzwords. Government and research administrators want to evaluate everything -- teachers, professors, training programs, universities -- using quantitative indicators. Among the tools used to measure "research excellence," bibliometrics -- aggregate data on publications and citations -- has become dominant. Bibliometrics is hailed as an "objective" measure of research quality, a quantitative measure more useful than "subjective" and intuitive evaluation methods such as peer review that have been used since scientific papers were first published in the seventeenth century. In this book, Yves Gingras offers a spirited argument against an unquestioning reliance on bibliometrics as an indicator of research quality. Gingras shows that bibliometric rankings have no real scientific validity, rarely measuring what they pretend to. Although the study of publication and citation patterns, at the proper scales, can yield insights on the global dynamics of science over time, ill-defined quantitative indicators often generate perverse and unintended effects on the direction of research. Moreover, abuse of bibliometrics occurs when data is manipulated to boost rankings. Gingras looks at the politics of evaluation and argues that using numbers can be a way to control scientists and diminish their autonomy in the evaluation process. Proposing precise criteria for establishing the validity of indicators at a given scale of analysis, Gingras questions why universities are so eager to let invalid indicators influence their research strategy."

Also available in print.

Mode of access: World Wide Web

Print version record.
