Interobserver and intraobserver reliability in the load sharing classification of the assessment of thoracolumbar burst fractures
- PMID: 15682019
- DOI: 10.1097/01.brs.0000152095.85927.24
Abstract
Study design: The Load Sharing Classification of spinal fractures was evaluated by 5 observers on 2 occasions.
Objective: To evaluate the interobserver and intraobserver reliability of the Load Sharing Classification of spinal fractures in the assessment of thoracolumbar burst fractures.
Summary of background data: The Load Sharing Classification of spinal fractures provides a basis for the choice of operative approaches, but the reliability of this classification system has not been established.
Methods: The radiographic and computed tomography images of 45 consecutive patients with thoracolumbar burst fractures were reviewed by 5 observers on 2 occasions 3 months apart. Interobserver reliability was assessed by comparing the fracture classifications determined by the 5 observers. Intraobserver reliability was evaluated by comparing the classifications determined by each observer on the first and second sessions. The 10 paired interobserver and 5 intraobserver comparisons were then analyzed using kappa statistics.
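For reference, the kappa statistic quantifies agreement corrected for chance; for a single pair of observers it is Cohen's kappa (a standard definition, not spelled out in the abstract itself):

\kappa = \frac{p_o - p_e}{1 - p_e}

where p_o is the observed proportion of agreement between the two observers and p_e is the proportion of agreement expected by chance alone.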
Results: All 5 observers agreed on the final classification for 58% and 73% of the fractures on the first and second assessments, respectively. The average kappa coefficient for the 10 paired comparisons among the 5 observers was 0.79 (range 0.73-0.89) for the first assessment and 0.84 (range 0.81-0.95) for the second assessment. Interobserver agreement improved when the 3 components of the classification system were analyzed separately, reaching almost perfect interobserver reliability (on the Landis and Koch benchmarks, kappa values of 0.61-0.80 indicate substantial agreement and 0.81-1.00 almost perfect agreement), with average kappa values of 0.90 (range 0.82-0.97) for the first assessment and 0.92 (range 0.83-1) for the second assessment. The kappa values for the 5 intraobserver comparisons ranged from 0.73 to 0.87 (average 0.78), indicating at least substantial agreement; 2 observers showed almost perfect intraobserver reliability. For the 3 components of the classification system, all observers reached almost perfect intraobserver agreement, with kappa values of 0.83 to 0.97 (average 0.89).
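As a purely hypothetical worked example of this scale (the numbers are illustrative, not from the study): if two observers agreed on 9 of 10 fractures (p_o = 0.9) and chance agreement were p_e = 0.5, then \kappa = (0.9 - 0.5)/(1 - 0.5) = 0.8, at the boundary between substantial and almost perfect agreement.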
Conclusions: Kappa statistics showed high levels of agreement when the Load Sharing Classification was used to assess thoracolumbar burst fractures. This system can be applied with excellent reliability.