Radiographic fracture assessments: which ones can we reliably make?

J Martin et al. J Orthop Trauma. 2000 Aug;14(6):379-85. doi: 10.1097/00005131-200008000-00001.

Abstract

Objective: To identify the fracture characteristics that can be reliably assessed by analysis of plain radiographs of tibial plateau fractures.

Design: Radiographic review study.

Participants: Five orthopaedic traumatologists served as observers.

Intervention: Observers made assessments based on the radiographs of fifty-six tibial plateau fractures. Precise definitions of the assessments to be made were agreed on by all observers. The tested assessments included raters' abilities to identify and locate fracture lines, identify the presence of fracture displacement and comminution, make quantitative measurements of displacement, and characterize qualitative features of fractures. For the thirty-eight fractures with a computed tomography (CT) scan available, assessments were repeated using both radiographs and CT scans.

Main outcome measures: To characterize interobserver reliability, percentage agreement and kappa statistics were calculated for categorical variables, and intraclass correlation coefficients (ICC) were calculated for noncategorical variables.
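
The following is a minimal illustrative sketch, not taken from the study, of how the agreement statistics named above can be computed: percentage agreement and Cohen's kappa for a single pair of raters on a categorical assessment, and a one-way intraclass correlation coefficient for a continuous measurement. All rating data, rater names, and variable names in the example are hypothetical.

    import numpy as np

    def percent_agreement(r1, r2):
        """Proportion of cases on which two raters give the same category."""
        r1, r2 = np.asarray(r1), np.asarray(r2)
        return np.mean(r1 == r2)

    def cohens_kappa(r1, r2):
        """Chance-corrected agreement between two raters on categorical labels."""
        r1, r2 = np.asarray(r1), np.asarray(r2)
        cats = np.union1d(r1, r2)
        po = np.mean(r1 == r2)  # observed agreement
        # expected agreement if the two raters labeled independently
        pe = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in cats)
        return (po - pe) / (1 - pe)

    def icc_oneway(ratings):
        """One-way random-effects ICC(1,1); rows = subjects, columns = raters."""
        ratings = np.asarray(ratings, dtype=float)
        n, k = ratings.shape
        grand = ratings.mean()
        subject_means = ratings.mean(axis=1)
        ms_between = k * np.sum((subject_means - grand) ** 2) / (n - 1)
        ms_within = np.sum((ratings - subject_means[:, None]) ** 2) / (n * (k - 1))
        return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

    # Hypothetical categorical assessment: two raters classifying eight
    # fractures as displaced (1) or not displaced (0).
    rater_a = [1, 0, 1, 1, 0, 1, 0, 0]
    rater_b = [1, 0, 1, 0, 0, 1, 1, 0]
    print("Percent agreement:", percent_agreement(rater_a, rater_b))
    print("Cohen's kappa:    ", cohens_kappa(rater_a, rater_b))

    # Hypothetical continuous assessment: five raters measuring articular
    # depression (mm) on four fractures.
    depression_mm = [[4.0, 5.0, 4.5, 4.0, 5.5],
                     [1.0, 1.5, 2.0, 1.0, 1.5],
                     [8.0, 7.5, 9.0, 8.5, 8.0],
                     [3.0, 2.5, 3.5, 3.0, 4.0]]
    print("ICC(1,1):         ", icc_oneway(depression_mm))

With multiple raters, as in the study, a multi-rater generalization such as Fleiss' kappa would typically be used in place of pairwise Cohen's kappa; the pairwise form is shown here only for brevity.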

Results: Reliability of the assessments varied widely. Determining the location of fracture lines had the greatest reliability, whereas the subjective assessments of fracture stability and energy showed the poorest reliability. Although the ICCs for quantitative measurements approached acceptable levels, the tolerance limits were extremely wide. The addition of a CT scan improved the reliability of most assessments, but not to a statistically significant degree.

Conclusions: Many of the basic radiographic interpretations relied on in making treatment decisions are made variably by observers. Using experienced raters and precise definitions of fracture assessments does not guarantee a high level of agreement. Discrete assessments yield higher interrater agreement than more qualitative assessments. Quantitative measures have wide tolerance limits and therefore probably cannot be used reproducibly to classify fractures or guide treatment decisions. We conclude that the reliability of fracture classification is limited by raters' abilities to agree on basic radiographic assessments.
