A scientific resource space for advanced research evaluation scenarios

Cristhian Parra, Muhammad Imran, Daniil Mirylenka, Florian Daniel, Fabio Casati, Maurizio Marchese

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Citation (Scopus)

Abstract

In this paper, we summarize our experience and the first results achieved in the context of advanced research evaluation. Striving for research metrics that effectively allow us to predict real opinions about researchers in a variety of scenarios, we conducted two experiments to understand the respective suitability of common indicators, such as the h-index. We concluded that realistic research evaluation is more complex than assumed by those indicators and, hence, may require the specification of complex evaluation algorithms. While the reconstruction (or reverse engineering) of those algorithms from publicly available data is one of our research goals, in this paper we show how we can enable users to develop their own algorithms with Reseval, our mashup-based research evaluation platform, and how doing so requires dealing with a variety of data management issues that are specific to the domain of research evaluation. Therefore, we also present the main concepts and model of our data access and management solution, the Scientific Resource Space (SRS).
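
The h-index mentioned in the abstract is the best known of these common indicators: an author has index h if h of their papers have each been cited at least h times. As a purely illustrative sketch (not code from the paper, nor part of the Reseval platform or the SRS), the standard definition can be computed from a list of per-paper citation counts as follows:

    # Minimal illustration of the standard h-index definition
    # (hypothetical helper, not taken from Reseval or the SRS).
    def h_index(citation_counts):
        """Largest h such that at least h papers have >= h citations each."""
        counts = sorted(citation_counts, reverse=True)
        h = 0
        for rank, cites in enumerate(counts, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Example: five papers cited 10, 8, 5, 4 and 3 times give an h-index of 4.
    print(h_index([10, 8, 5, 4, 3]))  # prints 4

The paper's argument is precisely that a single number of this kind is often too coarse for realistic evaluation scenarios, which motivates user-defined evaluation algorithms in Reseval and the data access and management support of the SRS.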

Original language: English
Title of host publication: SEBD 2011 - Proceedings of the 19th Italian Symposium on Advanced Database Systems
Pages: 203-214
Number of pages: 12
Publication status: Published - 2011
Externally published: Yes
Event: 19th Italian Symposium on Advanced Database Systems, SEBD 2011 - Maratea, Italy
Duration: 26 Jun 2011 - 29 Jun 2011

Conference

Conference: 19th Italian Symposium on Advanced Database Systems, SEBD 2011
Country: Italy
City: Maratea
Period: 26 Jun 2011 - 29 Jun 2011


Keywords

  • Reputation
  • Research evaluation
  • Resource space
  • Scientific data access and management

ASJC Scopus subject areas

  • Information Systems

Cite this

Parra, C., Imran, M., Mirylenka, D., Daniel, F., Casati, F., & Marchese, M. (2011). A scientific resource space for advanced research evaluation scenarios. In SEBD 2011 - Proceedings of the 19th Italian Symposium on Advanced Database Systems (pp. 203-214).
