Liquid benchmarks

Benchmarking-as-a-service

Sherif Sakr, Fabio Casati

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Citation (Scopus)

Abstract

Experimental evaluation and comparison of techniques, algorithms or complete systems is a crucial requirement for assessing the practical impact of research results. The quality of published experimental results is usually limited for several reasons, such as limited time, the unavailability of standard benchmarks, or a shortage of computing resources. Moreover, achieving an independent, consistent, complete and insightful assessment of different alternatives in the same domain is a time- and resource-consuming task. We demonstrate Liquid Benchmark, a cloud-based service that provides collaborative platforms to simplify the task of peer researchers in performing high-quality experimental evaluations and to guarantee a transparent scientific crediting process. The service allows building repositories of competing research implementations, sharing testing computing platforms, collaboratively building the specifications of standard benchmarks, and letting end-users easily create and run testing experiments and share their results.

Original language: English
Title of host publication: JCDL'11 - Proceedings of the 2011 ACM/IEEE Joint Conference on Digital Libraries
Pages: 451-452
Number of pages: 2
DOIs: 10.1145/1998076.1998181
Publication status: Published - 2011
Externally published: Yes
Event: 11th Annual International ACM/IEEE Joint Conference on Digital Libraries, JCDL'11 - Ottawa, ON, Canada
Duration: 13 Jun 2011 – 17 Jun 2011

Conference

Conference: 11th Annual International ACM/IEEE Joint Conference on Digital Libraries, JCDL'11
Country: Canada
City: Ottawa, ON
Period: 13.6.11 – 17.6.11

Keywords

  • benchmarking
  • cloud computing
  • SAAS
  • social web

ASJC Scopus subject areas

  • Engineering (all)

Cite this

Sakr, S., & Casati, F. (2011). Liquid benchmarks: Benchmarking-as-a-service. In JCDL'11 - Proceedings of the 2011 ACM/IEEE Joint Conference on Digital Libraries (pp. 451-452). https://doi.org/10.1145/1998076.1998181

@inproceedings{02878e4efbea41f0a5c8bb4888a4f316,
title = "Liquid benchmarks: Benchmarking-as-a-service",
abstract = "Experimental evaluation and comparison of techniques, algorithms or complete systems is a crucial requirement to assess the practical impact of research results. The quality of published experimental results is usually limited due to several reasons such as: limited time, unavailability of standard benchmarks or shortage of computing resources. Moreover, achieving an independent, consistent, complete and insightful assessment for different alternatives in the same domain is a time and resource consuming task. We demonstrate Liquid Benchmark as a cloud-based service that provides collaborative platforms to simplify the task of peer researchers in performing high quality experimental evaluations and guarantee a transparent scientific crediting process. The service allows building repositories of competing research implementations, sharing testing computing platforms, collaboratively building the specifications of standard benchmarks and allowing end-users to easily create and run testing experiments and share their results.",
keywords = "benchmarking, cloud computing, SAAS, social web",
author = "Sherif Sakr and Fabio Casati",
year = "2011",
doi = "10.1145/1998076.1998181",
language = "English",
isbn = "9781450307444",
pages = "451--452",
booktitle = "JCDL'11 - Proceedings of the 2011 ACM/IEEE Joint Conference on Digital Libraries",

}
