Prominent medical journals often provide insufficient information to assess the validity of studies with negative results

Randy S Hebert et al. J Negat Results Biomed. 2002 Sep 30;1:1. doi: 10.1186/1477-5751-1-1.

Abstract

Background: Physicians reading the medical literature attempt to determine whether research studies are valid. However, articles with negative results may not provide sufficient information to allow physicians to properly assess validity.

Methods: We analyzed all original research articles with negative results published in 1997 in the weekly journals BMJ, JAMA, Lancet, and New England Journal of Medicine, as well as those published in the 1997 and 1998 issues of the bimonthly Annals of Internal Medicine (N = 234). Our primary objective was to quantify the proportion of studies with negative results that comment on power and present confidence intervals. Secondary objectives were to quantify the proportion of these studies that specified an effect size and defined a primary outcome. We also performed analyses stratified by study design.

Results: Only 30% of the articles with negative results commented on power. The reporting of power (range: 15%-52%) and confidence intervals (range: 55%-81%) varied significantly among journals. Observational studies of etiology/risk factors addressed power less frequently (15%; 95% CI, 8%-21%) than did clinical trials (56%; 95% CI, 46%-67%; p < 0.001). While 87% of articles with power calculations specified an effect size the authors sought to detect, only a minority gave a rationale for that effect size. Only half of the studies with negative results clearly defined a primary outcome.
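The confidence intervals reported above are intervals for simple proportions. As a rough illustration of how such figures arise, the Python sketch below computes a normal-approximation (Wald) 95% interval and a two-proportion z statistic of the kind that could underlie the p < 0.001 comparison; the counts used are hypothetical (the abstract does not report the raw denominators), and the authors may well have used a different test, such as chi-square.

from math import sqrt

Z_95 = 1.959964  # critical value for a two-sided 95% interval

def wald_ci(successes: int, n: int) -> tuple[float, float, float]:
    """Normal-approximation (Wald) 95% CI for a proportion."""
    p = successes / n
    half_width = Z_95 * sqrt(p * (1 - p) / n)
    return p, p - half_width, p + half_width

def two_proportion_z(x1: int, n1: int, x2: int, n2: int) -> float:
    """Two-sample z statistic for comparing two proportions (pooled variance)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical counts chosen only to roughly match the reported percentages:
# 17/115 observational studies and 48/86 clinical trials commenting on power.
print(wald_ci(17, 115))                   # ~ (0.148, 0.083, 0.213)
print(wald_ci(48, 86))                    # ~ (0.558, 0.453, 0.663)
print(two_proportion_z(17, 115, 48, 86))  # |z| ~ 6.2, consistent with p < 0.001

With denominators of this size, the intervals reproduce the general width of those in the abstract, which is why the observational-study and clinical-trial proportions can be distinguished so clearly.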

Conclusion: Prominent medical journals often provide insufficient information to assess the validity of studies with negative results.

