On peer review in computer science

Analysis of its effectiveness and suggestions for improvement

Azzurra Ragone, Katsiaryna Mirylenka, Fabio Casati, Maurizio Marchese

Research output: Contribution to journal › Review article

12 Citations (Scopus)

Abstract

In this paper we focus on the analysis of peer reviews and reviewers' behaviour in a number of different review processes. More specifically, we report on the development, definition and rationale of a theoretical model for peer review processes that supports the identification of appropriate metrics for assessing the main characteristics of such processes, in order to render peer review more transparent and understandable. Together with known metrics and techniques, we introduce new ones to assess the overall quality (i.e., reliability, fairness, validity) and efficiency of peer review processes, e.g. the robustness of the process, the degree of agreement/disagreement among reviewers, or positive/negative bias in the reviewers' decision-making process. We also check the ability of peer review to predict the impact of papers in subsequent years. We apply the proposed model and analysis framework to a large data set of reviews from ten different computer science conferences, totalling ca. 9,000 reviews of ca. 2,800 submitted contributions. We discuss the implications of the results and their potential use toward improving the analysed peer review processes. A number of interesting results were found, in particular: (1) a low correlation between the peer review outcome and the subsequent impact of the accepted contributions; (2) the influence of the assessment scale on the way reviewers assign marks; (3) the effect and impact of rating bias, i.e. reviewers who consistently give lower/higher marks than all other reviewers; (4) the effectiveness of statistical approaches for optimizing some process parameters (e.g., the number of papers per reviewer) to improve the overall quality of the process while keeping the overall effort under control. Based on the lessons learned, we suggest ways to improve the overall quality of peer review through procedures that can be easily implemented in current editorial management systems.
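The record does not reproduce the paper's formal definitions, but two of the quantities named in the abstract are easy to illustrate. The following minimal Python sketch uses invented data (the "reviews" and "citations" values are illustrative, not drawn from the paper's data set) to compute (1) a simple per-reviewer rating-bias score, taken here as the reviewer's average signed deviation from the mean mark that co-reviewers gave the same submission, and (2) the Spearman rank correlation between a submission's average review mark and its later citation count; the paper's own metric definitions may differ in detail.

from collections import defaultdict
from scipy.stats import spearmanr

# Hypothetical review data: (reviewer, paper, mark on a 1-5 scale).
# The paper's real data set (ca. 9,000 reviews of ca. 2,800
# submissions) is not reproduced here.
reviews = [
    ("r1", "p1", 2), ("r2", "p1", 4), ("r3", "p1", 4),
    ("r1", "p2", 1), ("r2", "p2", 3), ("r4", "p2", 3),
    ("r1", "p3", 3), ("r3", "p3", 5), ("r4", "p3", 4),
]

# Group marks by paper so each mark can be compared with co-reviewers'.
by_paper = defaultdict(list)
for reviewer, paper, mark in reviews:
    by_paper[paper].append((reviewer, mark))

# Rating bias: a reviewer's average signed deviation from the mean of
# the other reviewers' marks on the same papers. Consistently negative
# values suggest a "harsh" reviewer, positive values a "lenient" one.
deviations = defaultdict(list)
for paper, marks in by_paper.items():
    for reviewer, mark in marks:
        others = [m for r, m in marks if r != reviewer]
        if others:
            deviations[reviewer].append(mark - sum(others) / len(others))

bias = {r: sum(d) / len(d) for r, d in deviations.items()}
print(bias)  # e.g. r1 scores consistently below its co-reviewers

# Predictive validity: rank correlation between a paper's mean review
# mark and its citation count some years later (citation counts here
# are invented for illustration).
citations = {"p1": 30, "p2": 2, "p3": 11}
papers = sorted(by_paper)
mean_marks = [sum(m for _, m in by_paper[p]) / len(by_paper[p]) for p in papers]
rho, pval = spearmanr(mean_marks, [citations[p] for p in papers])
print(rho, pval)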

Original language: English
Pages (from-to): 317-356
Number of pages: 40
Journal: Scientometrics
Volume: 97
Issue number: 2
DOIs: 10.1007/s11192-013-1002-z
Publication status: Published - 2013
Externally published: Yes

Keywords

  • Efficiency
  • Fairness
  • Peer review
  • Quality metrics
  • Reliability
  • Validity

ASJC Scopus subject areas

  • Social Sciences (all)
  • Computer Science Applications
  • Library and Information Sciences
  • Law

Cite this

Ragone, A., Mirylenka, K., Casati, F., & Marchese, M. (2013). On peer review in computer science: Analysis of its effectiveness and suggestions for improvement. Scientometrics, 97(2), 317-356. https://doi.org/10.1007/s11192-013-1002-z
@article{b3b8c372272444fcb2ae3887cbc74a36,
title = "On peer review in computer science: Analysis of its effectiveness and suggestions for improvement",
keywords = "Efficiency, Fairness, Peer review, Quality metrics, Reliability, Validity",
author = "Azzurra Ragone and Katsiaryna Mirylenka and Fabio Casati and Maurizio Marchese",
year = "2013",
doi = "10.1007/s11192-013-1002-z",
language = "English",
volume = "97",
pages = "317--356",
journal = "Scientometrics",
issn = "0138-9130",
publisher = "Springer Netherlands",
number = "2",
}
