Liquid benchmarks: Benchmarking-as-a-service

Sherif Sakr, Fabio Casati

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Citation (Scopus)


Experimental evaluation and comparison of techniques, algorithms, or complete systems is a crucial requirement for assessing the practical impact of research results. The quality of published experimental results is often limited for several reasons: limited time, the unavailability of standard benchmarks, or a shortage of computing resources. Moreover, achieving an independent, consistent, complete, and insightful assessment of the different alternatives in the same domain is a time- and resource-consuming task. We demonstrate Liquid Benchmark, a cloud-based service that provides collaborative platforms to simplify the task of peer researchers in performing high-quality experimental evaluations and to guarantee a transparent scientific crediting process. The service allows building repositories of competing research implementations, sharing testing computing platforms, collaboratively building the specifications of standard benchmarks, and letting end-users easily create and run testing experiments and share their results.
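The workflow the abstract describes — a shared repository of competing implementations, a common benchmark specification, and experiments that any user can run and whose results are shared — can be sketched in a few lines. This is a minimal illustrative model, not the authors' actual API; all names (`LiquidBenchmark`, `register`, `run`) are assumptions made for the sketch.

```python
import time
from dataclasses import dataclass, field


def insertion_sort(xs):
    """A deliberately naive competing implementation for the demo workload."""
    out = []
    for x in xs:
        i = len(out)
        while i > 0 and out[i - 1] > x:
            i -= 1
        out.insert(i, x)
    return out


@dataclass
class LiquidBenchmark:
    """Hypothetical sketch of a 'liquid benchmark': a shared workload spec,
    a repository of competing implementations, and shared run results."""
    name: str
    workload: list                                  # shared benchmark specification
    implementations: dict = field(default_factory=dict)
    results: list = field(default_factory=list)

    def register(self, impl_name, fn):
        # Add a competing research implementation to the shared repository.
        self.implementations[impl_name] = fn

    def run(self, user):
        # Run every registered implementation on the common workload and
        # append timing results so other users can inspect and compare them.
        for impl_name, fn in self.implementations.items():
            start = time.perf_counter()
            for case in self.workload:
                fn(case)
            elapsed = time.perf_counter() - start
            self.results.append(
                {"user": user, "impl": impl_name, "seconds": elapsed}
            )
        return self.results


# Two competing sort implementations evaluated on the same shared workload.
bench = LiquidBenchmark("sorting", workload=[list(range(500, 0, -1))] * 10)
bench.register("builtin_sort", sorted)
bench.register("insertion_sort", insertion_sort)
results = bench.run(user="demo-user")
```

The point of the sketch is the separation of concerns the service proposes: the workload is defined once and shared, implementations are contributed independently, and every run produces results attributed to the user who ran them.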

Original language: English
Title of host publication: JCDL'11 - Proceedings of the 2011 ACM/IEEE Joint Conference on Digital Libraries
Number of pages: 2
Publication status: Published - 2011
Externally published: Yes
Event: 11th Annual International ACM/IEEE Joint Conference on Digital Libraries, JCDL'11 - Ottawa, ON, Canada
Duration: 13 Jun 2011 - 17 Jun 2011


Conference: 11th Annual International ACM/IEEE Joint Conference on Digital Libraries, JCDL'11
City: Ottawa, ON


Keywords

  • benchmarking
  • cloud computing
  • SAAS
  • social web

ASJC Scopus subject areas

  • Engineering (all)

