
Snip!t from the collection of Alan Dix


Snip

Counting Research ⇒ Directing Research: The Hazard of Using Simple Metrics to Evaluate Scientific Contributions (An EU Experience)
http://quod.lib.umich.edu/j/jep/3336451.0020.102?view=text;rgn=main

Categories

/Channels/research methods/metrics and altmetrics



Full snip

In many EU countries there is a requirement to count research, i.e., to measure and prove its value. These numbers, often produced automatically based on the impact of journals, are used to rank universities, to determine fund distribution, to evaluate research proposals, and to determine the scientific merit of each researcher. While the real value of research may be difficult to measure, one avoids this problem by counting papers and citations in well-known journals. That is, the measured impact of a paper (and the scientific contribution) is defined to be equal to the impact of the journal that publishes it. The journal impact (and its scientific value) is then based on the references to papers in this journal. This ignores the fact that there may be huge differences between papers in the same journal; that there are significant discrepancies between impact values of different scientific areas; that research results may be offered outside the journals; and that citations may not be a good index for value. Since research is a collaborative activity, it may also be difficult to measure the contributions of each individual scientist. However, the real danger is not that the contributions may be counted wrongly, but that the measuring systems will also have a strong influence on the way we perform research.