Write your abstracts carefully - The impact of abstract reporting quality on findability by semi-automated title-abstract screening tools

I Spiero et al. J Clin Epidemiol. 2025 Sep 24:188:111987. doi: 10.1016/j.jclinepi.2025.111987. Online ahead of print.

Abstract

Background and objective: Evidence synthesis, such as the conduct of a systematic review or clinical guideline development, is time-consuming, laborious, and costly. This is largely due to the vast numbers of titles and abstracts that need to be screened. Semi-automated screening tools can accelerate this by prioritizing the most likely relevant abstracts by using an active learning strategy. The reliability of such tools in prioritizing abstracts is related to the modeling methods that the tool uses (ie, the ability of models to make reliable predictions of study relevance) and to the quality of the data that the modeling methods are applied to (ie, the consistency and completeness of reporting in the titles and abstracts of studies). Here, we aimed to gain insight into the latter by evaluating the association between abstract reporting characteristics and findability by semi-automated screening tools.
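As a minimal sketch of the active learning strategy described above, the loop below prioritizes records by repeatedly retraining a classifier on the labels collected so far and surfacing the record predicted most likely to be relevant. It assumes TF-IDF features and a Naive Bayes classifier (common defaults in tools of this kind, including ASReview); all function and variable names are illustrative, not taken from any particular tool.

    # Illustrative certainty-based active learning loop for
    # title-abstract screening. Assumes scikit-learn is available.
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB

    def screen(abstracts, oracle_labels, n_seed=2, n_queries=50):
        """Return the order in which records are screened.

        `oracle_labels` stands in for the human reviewer's judgments;
        in a real session each label is supplied interactively. The
        seed set is assumed to contain at least one relevant and one
        irrelevant record so the classifier sees both classes.
        """
        X = TfidfVectorizer(stop_words="english").fit_transform(abstracts)
        labeled = list(range(n_seed))            # seed records screened first
        labels = [oracle_labels[i] for i in labeled]
        for _ in range(n_queries):
            clf = MultinomialNB().fit(X[labeled], labels)
            scores = clf.predict_proba(X)[:, 1]  # P(relevant) per record
            scores[labeled] = -1.0               # never re-show screened records
            nxt = int(np.argmax(scores))         # query the record predicted
            labeled.append(nxt)                  # most likely to be relevant
            labels.append(oracle_labels[nxt])    # reviewer labels it; retrain
        return labeled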

Methods: We tested the impact of reporting quality of abstracts on semi-automated screening tools by evaluating whether (I) abstract reporting quality (as scored by Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD)), (II) abstract structure, and (III) abstract terminology usage are associated with findability of relevant studies during semi-automated title-abstract screening. We performed simulations using a publicly available semi-automated screening tool, ASReview, and data from two previously conducted comprehensive systematic reviews of prognostic model studies.
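In such a simulation, the reviewer's decisions are replayed from the known labels of the completed reviews, and each relevant record's position in the resulting screening order can be summarized, for example as the fraction of the dataset screened before it surfaces (a time-to-discovery style measure; the abstract does not specify the paper's exact operationalization, so the helper below is an illustrative sketch).

    def time_to_discovery(screening_order, relevant_ids, n_total):
        # Fraction of all records screened before each relevant record
        # surfaces; lower values mean the record was easier to find.
        return {r: (screening_order.index(r) + 1) / n_total
                for r in relevant_ids}

On a measure like this, a relevant study whose abstract is poorly reported would tend to show a systematically larger value, that is, it surfaces later in the prioritized order.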

Results: We found that better abstract reporting quality was clearly associated with greater findability by the semi-automated screening tool. To a lesser extent, the use of abstract subheadings was also associated with findability. Other abstract structure characteristics and abstract terminology usage were not associated with findability.

Conclusion: We conclude that better reporting quality of abstracts is associated with better findability by semi-automated title-abstract screening tools. This stresses the importance of adhering to abstract reporting guidelines, not only for consistent and transparent reporting across studies in general but also for enhancing the identification of relevant studies by screening tools during evidence synthesis.

Plain language summary: Systematic reviews summarize scientific evidence from the literature to support clinical decision-making. In the conduct of a systematic review, thousands of papers have to be screened for relevance, making the process costly and laborious. To accelerate this process, several tools have been developed that can assist in identifying relevant scientific literature. Such tools screen the abstracts of scientific papers and predict which papers are likely to be relevant for the systematic review. However, the performance of these screening tools may depend on whether information is accurately and completely reported in the abstracts. Here, we aimed to evaluate the impact of abstract reporting quality on the performance of screening tools. We used data from a set of scientific papers whose abstract reporting quality had been scored manually (following an existing reporting checklist called TRIPOD), and we applied an existing screening tool as an example. We simulated the procedure of conducting a systematic review and evaluated whether the relevance predictions of the screening tool were associated with abstract reporting quality. We found that relevant scientific papers with poorly reported abstracts were more difficult for the screening tool to identify as relevant. This finding highlights the importance of adhering to reporting guidelines, not only for the transparency of scientific findings but also for the optimal use of screening tools in the conduct of a systematic review.

Keywords: Active learning; Evidence synthesis; Prioritized screening; Reporting guidelines; Reporting quality of abstracts; Technology-assisted reviewing.


Conflict of interest statement

K.G.M. Moons was involved in the development of the TRIPOD reporting guideline. All other authors have nothing to declare.
