Abstract
In this paper, we summarize our experience and first results in the context of advanced research evaluation. Striving for research metrics that effectively allow us to predict real opinions about researchers in a variety of scenarios, we conducted two experiments to understand the suitability of common indicators, such as the h-index. We concluded that realistic research evaluation is more complex than assumed by those indicators and, hence, may require the specification of rather complex evaluation algorithms. While the reconstruction (or reverse engineering) of those algorithms from publicly available data is one of our research goals, in this paper we show how we can enable users to develop their own algorithms with Reseval, our mashup-based research evaluation platform, and how doing so requires dealing with a variety of data management issues that are specific to the domain of research evaluation. We therefore also present the main concepts and model of our data access and management solution, the Scientific Resource Space (SRS).
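As an illustration of the kind of indicator the abstract refers to, here is a minimal sketch (not from the paper itself) of how the standard h-index can be computed from a researcher's citation counts:

```python
def h_index(citations):
    """Return the h-index: the largest h such that the researcher
    has at least h papers with at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the rank-th paper still has >= rank citations
        else:
            break
    return h
```

For example, a researcher with papers cited 10, 8, 5, 4, and 3 times has an h-index of 4. Such a single number necessarily discards much of the information a realistic evaluation would use, which is the limitation the paper's experiments examine.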
| Original language | English |
|---|---|
| Pages | 203-214 |
| Number of pages | 12 |
| Publication status | Published - 2011 |
| Externally published | Yes |
| Event | 19th Italian Symposium on Advanced Database Systems, SEBD 2011 - Maratea, Italy. Duration: 26 Jun 2011 → 29 Jun 2011 |
Conference
| Conference | 19th Italian Symposium on Advanced Database Systems, SEBD 2011 |
|---|---|
| Country/Territory | Italy |
| City | Maratea |
| Period | 26/06/11 → 29/06/11 |
Keywords
- Reputation
- Research evaluation
- Resource space
- Scientific data access and management