Comparative Study

J Clin Epidemiol. 2012 Jan;65(1):47-52. doi: 10.1016/j.jclinepi.2011.05.001. Epub 2011 Aug 9.

Panel discussion does not improve reliability of peer review for medical research grant proposals

Mikael Fogelholm et al.

Abstract

Objective: Peer review is the gold standard for evaluating scientific quality. Compared with studies on inter-reviewer variability, research on panel evaluation is scarce. To appraise the reliability of panel evaluations in grant review, we compared scores by two expert panels reviewing the same grant proposals. Our main interest was to evaluate whether panel discussion improves reliability.

Methods: Thirty reviewers were randomly allocated to one of two panels. Sixty-five grant proposals in the fields of clinical medicine and epidemiology were reviewed by both panels. Each reviewer received 5-12 proposals, and each proposal was evaluated by two reviewers using a six-point scale. The reliability of reviewer and panel scores was evaluated using Cohen's kappa with linear weighting. Reliability was also evaluated for the panel mean scores (the mean of the two reviewer scores was used as the panel score).
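The agreement statistic used here, Cohen's kappa with linear weighting, penalizes rater disagreements in proportion to their distance on the ordinal scale. A minimal sketch of the computation for the six-point scale described above (the function name and score encoding are illustrative, not taken from the study):

```python
import numpy as np

def linear_weighted_kappa(a, b, k=6):
    """Cohen's kappa with linear weighting for two raters scoring
    the same items on a k-point ordinal scale (scores 1..k)."""
    a, b = np.asarray(a), np.asarray(b)
    n = len(a)
    # Observed joint distribution of the two raters' scores.
    obs = np.zeros((k, k))
    for i, j in zip(a, b):
        obs[i - 1, j - 1] += 1
    obs /= n
    # Expected joint distribution under independence (marginal products).
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))
    # Linear disagreement weights: |i - j| score points apart.
    idx = np.arange(k)
    w = np.abs(idx[:, None] - idx[None, :])
    # Kappa = 1 - (weighted observed disagreement / weighted expected disagreement).
    return 1 - (w * obs).sum() / (w * exp).sum()
```

Perfect agreement yields kappa = 1; values near 0, like the 0.23 reported below, indicate agreement only slightly better than chance.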

Results: The proportion of large differences (at least two points) was 40% for reviewers in panel A, 36% for reviewers in panel B, 26% for the panel discussion scores, and 14% when the means of the two reviewer scores were used. The kappa for the panel score after discussion was 0.23 (95% confidence interval: 0.08, 0.39). Using the mean of the reviewer scores, the interpanel coefficient was likewise 0.23 (0.00, 0.46).

Conclusion: The reliability between panel scores was higher than between reviewer scores. The similar interpanel reliability, when using the final panel score or the mean value of reviewer scores, indicates that panel discussions per se did not improve the reliability of the evaluation.
