Liquid benchmarks

Towards an online platform for collaborative assessment of computer science research results

Sherif Sakr, Fabio Casati

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

3 Citations (Scopus)

Abstract

Experimental evaluation and comparison of techniques, algorithms, approaches or complete systems is a crucial requirement for assessing the practical impact of research results. The quality of published experimental results is often limited for several reasons, such as limited time, the unavailability of standard benchmarks or a shortage of computing resources. Moreover, achieving an independent, consistent, complete and insightful assessment of the different alternatives in a domain is a time- and resource-consuming task that must also be repeated periodically to remain up to date. In this paper, we coin the notion of Liquid Benchmarks: online public services that provide collaborative platforms to unify the efforts of peer researchers from all over the world, simplify the task of performing high-quality experimental evaluations and guarantee a transparent scientific crediting process.

Original language: English
Title of host publication: Performance Evaluation, Measurement and Characterization of Complex Systems - Second TPC Technology Conference, TPCTC 2010, Revised Selected Papers
Pages: 10-24
Number of pages: 15
Volume: 6417 LNCS
DOIs: https://doi.org/10.1007/978-3-642-18206-8_2
Publication status: Published - 2011
Externally published: Yes
Event: 2nd TPC Technology Conference on Performance Evaluation and Benchmarking, TPCTC 2010, Held in Conjunction with the 36th International Conference on Very Large Data Bases, VLDB 2010 - Singapore, Singapore
Duration: 13 Sep 2010 → 17 Sep 2010

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 6417 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 2nd TPC Technology Conference on Performance Evaluation and Benchmarking, TPCTC 2010, Held in Conjunction with the 36th International Conference on Very Large Data Bases, VLDB 2010
Country: Singapore
City: Singapore
Period: 13.9.10 → 17.9.10

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science (all)

Cite this

Sakr, S., & Casati, F. (2011). Liquid benchmarks: Towards an online platform for collaborative assessment of computer science research results. In Performance Evaluation, Measurement and Characterization of Complex Systems - Second TPC Technology Conference, TPCTC 2010, Revised Selected Papers (Vol. 6417 LNCS, pp. 10-24). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 6417 LNCS). https://doi.org/10.1007/978-3-642-18206-8_2

@inproceedings{6061539773fe4a3a87b6426f2c6ba9e3,
title = "Liquid benchmarks: Towards an online platform for collaborative assessment of computer science research results",
author = "Sherif Sakr and Fabio Casati",
year = "2011",
doi = "10.1007/978-3-642-18206-8_2",
language = "English",
isbn = "9783642182051",
volume = "6417 LNCS",
series = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
pages = "10--24",
booktitle = "Performance Evaluation, Measurement and Characterization of Complex Systems - Second TPC Technology Conference, TPCTC 2010, Revised Selected Papers",

}
