Reverse-engineering conference rankings

What does it take to make a reputable conference?

Peep Küngas, Siim Karus, Svitlana Vakulenko, Marlon Dumas, Cristhian Parra, Fabio Casati

Research output: Contribution to journal › Article

6 Citations (Scopus)

Abstract

In recent years, several national and community-driven conference rankings have been compiled. These rankings are often taken as indicators of reputation and used for a variety of purposes, such as evaluating the performance of academic institutions and individual scientists, or selecting target conferences for paper submissions. Current rankings are based on a combination of objective criteria and subjective opinions that are collated and reviewed through largely manual processes. In this setting, the aim of this paper is to shed light on the following question: to what extent do existing conference rankings reflect objective criteria, specifically submission and acceptance statistics and bibliometric indicators? The paper specifically considers three conference rankings in the field of Computer Science: an Australian national ranking, a Brazilian national ranking and an informal community-built ranking. It is found that in all cases bibliometric indicators are the most important determinants of rank. It is also found that in all rankings, top-tier conferences can be identified with relatively high accuracy through acceptance rates and bibliometric indicators. On the other hand, acceptance rates and bibliometric indicators fail to discriminate between mid-tier and bottom-tier conferences.
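
The tier-prediction finding lends itself to a small illustration. What follows is a minimal sketch, not the authors' code: it assumes a decision-tree classifier and a hypothetical feature set (acceptance rate, mean citations per paper, papers per year), with toy numbers that are purely illustrative.

# Minimal sketch (assumptions labeled): predict a conference's rank tier
# from objective criteria, in the spirit of the paper's experiments.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical features per conference:
# [acceptance rate, mean citations per paper, papers per year]
X = [
    [0.15, 25.0, 120],  # assumed top-tier (A) examples
    [0.18, 30.0, 150],
    [0.30,  8.0,  80],  # assumed mid-tier (B) examples
    [0.35,  6.0,  60],
    [0.45,  2.0,  40],  # assumed bottom-tier (C) examples
    [0.50,  1.5,  30],
]
y = ["A", "A", "B", "B", "C", "C"]

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# A low acceptance rate plus high citation impact is classified as top tier.
print(clf.predict([[0.17, 28.0, 130], [0.40, 4.0, 50]]))

A real replication would substitute the actual ranking and bibliometric data and cross-validate; the sketch only mirrors the shape of the reported result, including its limit: features like these separate the top tier well but give little purchase on mid- versus bottom-tier venues.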

Original language: English
Pages (from-to): 651-665
Number of pages: 15
Journal: Scientometrics
Volume: 96
Issue number: 2
DOI: 10.1007/s11192-012-0938-8
Publication status: Published - 2013
Externally published: Yes

Keywords

  • Bibliometrics
  • Citation counts
  • Computer science
  • Conference acceptance rate
  • Conference rankings
  • Objective criteria
  • Publication counts

ASJC Scopus subject areas

  • Social Sciences (all)
  • Computer Science Applications
  • Library and Information Sciences
  • Law

Cite this

Reverse-engineering conference rankings: What does it take to make a reputable conference? / Küngas, Peep; Karus, Siim; Vakulenko, Svitlana; Dumas, Marlon; Parra, Cristhian; Casati, Fabio.

In: Scientometrics, Vol. 96, No. 2, 2013, p. 651-665.

Research output: Contribution to journal › Article

Küngas, Peep; Karus, Siim; Vakulenko, Svitlana; Dumas, Marlon; Parra, Cristhian; Casati, Fabio. / Reverse-engineering conference rankings: What does it take to make a reputable conference? In: Scientometrics. 2013; Vol. 96, No. 2. pp. 651-665.
@article{e3c8769c6a97486e996082a5909f8f06,
title = "Reverse-engineering conference rankings: What does it take to make a reputable conference?",
abstract = "In recent years, several national and community-driven conference rankings have been compiled. These rankings are often taken as indicators of reputation and used for a variety of purposes, such as evaluating the performance of academic institutions and individual scientists, or selecting target conferences for paper submissions. Current rankings are based on a combination of objective criteria and subjective opinions that are collated and reviewed through largely manual processes. In this setting, the aim of this paper is to shed light into the following question: to what extent existing conference rankings reflect objective criteria, specifically submission and acceptance statistics and bibliometric indicators? The paper specifically considers three conference rankings in the field of Computer Science: an Australian national ranking, a Brazilian national ranking and an informal community-built ranking. It is found that in all cases bibliometric indicators are the most important determinants of rank. It is also found that in all rankings, top-tier conferences can be identified with relatively high accuracy through acceptance rates and bibliometric indicators. On the other hand, acceptance rates and bibliometric indicators fail to discriminate between mid-tier and bottom-tier conferences.",
keywords = "Bibliometrics, Citation counts, Computer science, Conference acceptance rate, Conference rankings, Objective criteria, Publication counts",
author = "Peep K{\"u}ngas and Siim Karus and Svitlana Vakulenko and Marlon Dumas and Cristhian Parra and Fabio Casati",
year = "2013",
doi = "10.1007/s11192-012-0938-8",
language = "English",
volume = "96",
pages = "651--665",
journal = "Scientometrics",
issn = "0138-9130",
publisher = "Springer Netherlands",
number = "2",

}

TY - JOUR
T1 - Reverse-engineering conference rankings
T2 - What does it take to make a reputable conference?
AU - Küngas, Peep
AU - Karus, Siim
AU - Vakulenko, Svitlana
AU - Dumas, Marlon
AU - Parra, Cristhian
AU - Casati, Fabio
PY - 2013
Y1 - 2013
N2 - In recent years, several national and community-driven conference rankings have been compiled. These rankings are often taken as indicators of reputation and used for a variety of purposes, such as evaluating the performance of academic institutions and individual scientists, or selecting target conferences for paper submissions. Current rankings are based on a combination of objective criteria and subjective opinions that are collated and reviewed through largely manual processes. In this setting, the aim of this paper is to shed light on the following question: to what extent do existing conference rankings reflect objective criteria, specifically submission and acceptance statistics and bibliometric indicators? The paper specifically considers three conference rankings in the field of Computer Science: an Australian national ranking, a Brazilian national ranking and an informal community-built ranking. It is found that in all cases bibliometric indicators are the most important determinants of rank. It is also found that in all rankings, top-tier conferences can be identified with relatively high accuracy through acceptance rates and bibliometric indicators. On the other hand, acceptance rates and bibliometric indicators fail to discriminate between mid-tier and bottom-tier conferences.
AB - In recent years, several national and community-driven conference rankings have been compiled. These rankings are often taken as indicators of reputation and used for a variety of purposes, such as evaluating the performance of academic institutions and individual scientists, or selecting target conferences for paper submissions. Current rankings are based on a combination of objective criteria and subjective opinions that are collated and reviewed through largely manual processes. In this setting, the aim of this paper is to shed light on the following question: to what extent do existing conference rankings reflect objective criteria, specifically submission and acceptance statistics and bibliometric indicators? The paper specifically considers three conference rankings in the field of Computer Science: an Australian national ranking, a Brazilian national ranking and an informal community-built ranking. It is found that in all cases bibliometric indicators are the most important determinants of rank. It is also found that in all rankings, top-tier conferences can be identified with relatively high accuracy through acceptance rates and bibliometric indicators. On the other hand, acceptance rates and bibliometric indicators fail to discriminate between mid-tier and bottom-tier conferences.
KW - Bibliometrics
KW - Citation counts
KW - Computer science
KW - Conference acceptance rate
KW - Conference rankings
KW - Objective criteria
KW - Publication counts
UR - http://www.scopus.com/inward/record.url?scp=84880137231&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84880137231&partnerID=8YFLogxK
U2 - 10.1007/s11192-012-0938-8
DO - 10.1007/s11192-012-0938-8
M3 - Article
VL - 96
SP - 651
EP - 665
JO - Scientometrics
JF - Scientometrics
SN - 0138-9130
IS - 2
ER -