PLoS One. 2008;3(10):e3480.
doi: 10.1371/journal.pone.0003480. Epub 2008 Oct 22.

Does the committee peer review select the best applicants for funding? An investigation of the selection process for two European Molecular Biology Organization programmes


Lutz Bornmann et al. PLoS One. 2008.

Abstract

Does peer review fulfill its declared objective of identifying the best science and the best scientists? To answer this question, we analyzed the Long-Term Fellowship (LTF) and Young Investigator (YI) programmes of the European Molecular Biology Organization. Both programmes aim to identify and support the best postdoctoral fellows and young group leaders in the life sciences. We examined the association between the selection decisions and the scientific performance of the applicants. Our study involved publication and citation data for 668 applicants to the Long-Term Fellowship programme from the year 1998 (130 approved, 538 rejected) and 297 applicants to the Young Investigator programme (39 approved, 258 rejected) from the years 2001 and 2002. If quantity and impact of research publications are used as the criterion for scientific achievement, the results of (zero-truncated) negative binomial models show that the peer review process indeed selects scientists who, subsequent to application, perform at a higher level than the rejected applicants. We determined the extent of errors due to over-estimation (type I errors) and under-estimation (type II errors) of future scientific performance. Our statistical analyses indicate that between 26% and 48% of the decisions to award or reject an application show one of these two error types. Even though the selection committee did not correctly estimate the future performance of a part of the applicants, the results show a statistically significant association between selection decisions and the applicants' scientific achievements, if quantity and impact of research publications are used as the criterion for scientific achievement.
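The error accounting described above can be sketched in a few lines. The applicant records and the performance threshold below are invented for illustration; the paper's actual criterion combines publication counts and citation impact via (zero-truncated) negative binomial models, not a single cut-off. A type I error is an approved applicant who later underperforms; a type II error is a rejected applicant who later performs well.

```python
# Hypothetical records: (approved?, post-application performance score).
# Scores and the threshold are made up for illustration only.
applicants = [
    (True, 12),   # approved, performs well      -> correct decision
    (True, 3),    # approved, underperforms      -> type I error
    (False, 2),   # rejected, underperforms      -> correct decision
    (False, 15),  # rejected, performs well      -> type II error
]
threshold = 8  # hypothetical cut-off separating "high" performers

type1 = sum(1 for approved, perf in applicants if approved and perf < threshold)
type2 = sum(1 for approved, perf in applicants if not approved and perf >= threshold)
error_rate = (type1 + type2) / len(applicants)

print(type1, type2, error_rate)  # 1 1 0.5
```

With these toy numbers, half the decisions are erroneous; the study reports an overall error share between 26% and 48% for the real programmes.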


Conflict of interest statement

Competing Interests: The authors have declared that no competing interests exist.

Figures

Figure 1
Figure 1. Data structure of this study.
Figure 2
Figure 2. Box plots for the number of papers published subsequent to application (first row).
Median numbers of citations for papers published prior to application (second row) and median numbers of citations for papers published subsequent to application (third row) (approved and rejected applicants for the LTF and YI programme). Note. Applications from 1998 (LTF programme) and 2001/2002 (YI programme); publication windows: from 1993 to the beginning of 2006 (LTF programme), from 1984 to the beginning of 2007 (YI programme); citation window: from year of publication to the beginning of 2006 and 2007, respectively. Since the downloading of citation counts was done in 2006 and 2007, respectively, one cannot expect high median citation counts yet for the most recent publications (see the graphs in the third row of the figure).
Figure 3
Figure 3. Box plots for h index values of approved and rejected applicants for the LTF and YI programme.
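The h index compared in Figure 3 is the largest number h such that h of an applicant's papers have each received at least h citations. A minimal computation (the citation counts below are invented for illustration):

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank   # the top `rank` papers all have >= rank citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers have at least 4 citations each
```

Because the h index depends jointly on the number of papers and their citations, it summarizes both the quantity and the impact dimensions used elsewhere in the study.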

