Abstract
Experimental evaluation and comparison of techniques, algorithms, or complete systems is crucial for assessing the practical impact of research results. The quality of published experimental results is often limited by factors such as limited time, the unavailability of standard benchmarks, or a shortage of computing resources. Moreover, achieving an independent, consistent, complete, and insightful assessment of different alternatives in the same domain is a time- and resource-consuming task. We demonstrate Liquid Benchmark, a cloud-based service that provides collaborative platforms to simplify the task of peer researchers in performing high-quality experimental evaluations and to guarantee a transparent scientific crediting process. The service allows building repositories of competing research implementations, sharing testing computing platforms, collaboratively building the specifications of standard benchmarks, and letting end-users easily create, run, and share the results of testing experiments.
Original language | English |
---|---|
Title of host publication | JCDL'11 - Proceedings of the 2011 ACM/IEEE Joint Conference on Digital Libraries |
Pages | 451-452 |
Number of pages | 2 |
DOIs | |
Publication status | Published - 2011 |
Externally published | Yes |
Event | 11th Annual International ACM/IEEE Joint Conference on Digital Libraries, JCDL'11 - Ottawa, ON, Canada Duration: 13 Jun 2011 → 17 Jun 2011 |
Keywords
- benchmarking
- cloud computing
- SaaS
- social web
ASJC Scopus subject areas
- Engineering (all)