Review

Quantitative statistical methods for image quality assessment

Joyita Dutta et al. Theranostics. 2013 Oct 4;3(10):741-56. doi: 10.7150/thno.6815.

Abstract

Quantitative measures of image quality and reliability are critical for both qualitative interpretation and quantitative analysis of medical images. While, in theory, it is possible to analyze reconstructed images by means of Monte Carlo simulations using a large number of noise realizations, the associated computational burden makes this approach impractical. Additionally, this approach is less meaningful in clinical scenarios, where multiple noise realizations are generally unavailable. The practical alternative is to compute closed-form analytical expressions for image quality measures. The objective of this paper is to review statistical analysis techniques that enable us to compute two key metrics: resolution (determined from the local impulse response) and covariance. The underlying methods include fixed-point approaches, which compute these metrics at a fixed point (the unique and stable solution) independent of the iterative algorithm employed, and iteration-based approaches, which yield results that are dependent on the algorithm, initialization, and number of iterations. We also explore extensions of some of these methods to a range of special contexts, including dynamic and motion-compensated image reconstruction. While most of the discussed techniques were developed for emission tomography, the general methods are extensible to other imaging modalities as well. In addition to enabling image characterization, these analysis techniques allow us to control and enhance imaging system performance. We review practical applications where performance improvement is achieved by applying these ideas to the contexts of both hardware (optimizing scanner design) and image reconstruction (designing regularization functions that produce uniform resolution or maximize task-specific figures of merit).
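
To make the cost contrast concrete, consider a toy linear-Gaussian model in which the penalized least-squares estimator has an exact closed-form covariance. The Python sketch below (ours, not from the paper; the system matrix, noise level, and penalty are illustrative stand-ins) checks that closed-form "fixed-point" prediction against a Monte Carlo estimate over many noise realizations:

    import numpy as np

    rng = np.random.default_rng(0)
    n_pix, n_meas = 16, 24
    A = rng.standard_normal((n_meas, n_pix))   # toy system matrix
    x_true = np.ones(n_pix)
    sigma, beta = 0.1, 1.0                     # noise level, regularization weight
    R = np.eye(n_pix)                          # quadratic penalty matrix

    # Closed form: for xhat(y) = (A'A + beta R)^-1 A'y and Cov(y) = sigma^2 I,
    # Cov(xhat) = sigma^2 H A'A H with H = (A'A + beta R)^-1.
    H = np.linalg.inv(A.T @ A + beta * R)
    cov_pred = sigma**2 * H @ A.T @ A @ H

    # Monte Carlo: reconstruct many independent noise realizations.
    xhats = [H @ (A.T @ (A @ x_true + sigma * rng.standard_normal(n_meas)))
             for _ in range(5000)]
    cov_mc = np.cov(np.array(xhats).T)

    print(abs(cov_pred - cov_mc).max())        # shrinks as realizations grow

For the nonlinear, iteratively reconstructed images this review addresses, no such exact expression is available; the fixed-point and iteration-based analyses reviewed here supply the analogous approximate predictions.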

Keywords: image quality metrics; local impulse response; resolution; tomography; variance.


Conflict of interest statement

Competing Interests: The authors have declared that no competing interest exists.

Figures

Figure 1
Schematic illustrating the model-based iterative reconstruction procedure. The forward model predicts the data, $\bar{y}$, as a function of the image $x$. The reconstruction routine seeks to determine the unknown image as some explicit or implicit function, $\hat{x}(y)$, of the data. For iterative reconstruction, this function is an implicit function given by the maximizer of some objective function: $\hat{x}(y) = \arg\max_x \Phi(x, y)$. The objective function, $\Phi(x, y)$, depends both on the goodness of fit between the predicted and measured data and on prior information about the unknown image. At the end of each iteration, the current image $x^{(n)}$ is replaced by an updated estimate $x^{(n+1)}$ until some stopping criterion is reached.
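
As a deliberately simplified instance of this loop, the Python sketch below runs projected gradient ascent on a penalized Poisson log-likelihood; the linear forward model, step size, and separable quadratic penalty are illustrative assumptions, not the paper's algorithm:

    import numpy as np

    def reconstruct(y, A, beta=0.1, step=1e-3, n_iter=200):
        """Toy penalized ML reconstruction: maximize Phi(x, y) by gradient ascent."""
        x = np.ones(A.shape[1])                   # initial image x^(0)
        for _ in range(n_iter):                   # each pass maps x^(n) to x^(n+1)
            ybar = A @ x + 1e-9                   # forward-model prediction of the data
            grad_ll = A.T @ (y / ybar - 1.0)      # Poisson log-likelihood gradient
            grad_pen = beta * x                   # gradient of the (beta/2)||x||^2 prior
            x = np.maximum(x + step * (grad_ll - grad_pen), 0.0)  # ascent + nonnegativity
        return x                                  # the implicit estimator xhat(y)

Run to convergence, the output depends only on the maximizer of the objective (the fixed point); stopped early, it also depends on the algorithm, initialization, and iteration count, which is the distinction the abstract draws between fixed-point and iteration-based analyses.
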
Figure 2
Comparison of predicted standard deviation from an analytical expression given in and sample standard deviation calculated from Monte Carlo simulations as in Figure 7 of . The standard deviation was calculated at the central pixel indicated by a (+) symbol inside the 2D digital phantom image in the inset. The image size was 128 × 64 pixels with a 4.5 mm pixel size, and the sinogram size was 192 radial bins × 96 angular bins with a radial bin spacing of 4.5 mm. The emission activity was 3 in the hot region (black), 2 in the background (dark gray), and 1 in the cold region (light gray). The attenuation coefficient was 0.013/mm in the hot region, 0.0096/mm in the background, and 0.003/mm in the cold region. The simulated photon counts were 0.25M, 1M, 4M, and 16M. Background events such as randoms and scatter were simulated as a uniform field containing 10% of the true events. For each photon count, 100 data sets contaminated by Poisson noise were generated. For each data set, a quadratically penalized likelihood image was reconstructed using 20 iterations of an ordered-subsets version of De Pierro's modified EM with 8 subsets. The regularization parameter was chosen to be proportional to the total count as in . The standard errors of the standard deviations were computed by bootstrapping.
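
The Monte Carlo procedure in this caption amounts to simple bookkeeping over noise realizations. In the Python sketch below, `reconstruct` and the mean sinogram `ybar` are hypothetical stand-ins for the penalized-likelihood reconstruction and the simulated scanner; the bootstrap standard error mirrors the error bars described above:

    import numpy as np

    rng = np.random.default_rng(1)

    def pixel_std_with_bootstrap(ybar, reconstruct, pixel, n_real=100, n_boot=500):
        vals = np.empty(n_real)
        for k in range(n_real):
            y = rng.poisson(ybar)                 # one Poisson-contaminated data set
            vals[k] = reconstruct(y)[pixel]       # reconstructed value at the pixel
        std = vals.std(ddof=1)                    # sample standard deviation
        boot = np.empty(n_boot)                   # resample realizations with replacement
        for b in range(n_boot):
            boot[b] = rng.choice(vals, n_real, replace=True).std(ddof=1)
        return std, boot.std(ddof=1)              # estimate and its bootstrap standard error
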
Figure 3
Variance images from the simulation study in Figure 2. The left column shows empirical estimates obtained from Monte Carlo simulations with 100 noise realizations. The right column shows the predicted variance from a single noise realization using the analytical approach in . The rows correspond to the following photon count/regularization parameter combinations: 0.25M/0.0156, 1M/0.0625, 4M/0.25, 16M/1.
Figure 4
Horizontal and vertical profiles (concatenated left to right) through the linearized local impulse responses (LIRs) at the three locations indicated by red, blue, and green markers in the digital phantom (top row). This figure compares the UQP (middle row) with the modified quadratic penalty whose spatially modulated weights are based on aggregate certainty measures, as proposed in (bottom row). Details about the simulated system are provided in the caption of Figure 2. With the UQP, the resolution worsens with increasing activity (from left to right), as revealed by both the horizontal and vertical profiles; the modified penalty mitigates this degradation. This study is similar to that shown in Figure 4 of . Note that, while Figure 4 of also compares the results of eqs. (6) (circles) and (10) (solid lines), our results are based only on eq. (10).
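
A common numerical recipe for a linearized LIR is to perturb the noiseless (mean) data with a small impulse at the pixel of interest and difference the resulting reconstructions. The Python sketch below illustrates that generic perturbation definition (it is not a reproduction of the paper's eq. (10)); `forward` and `reconstruct` are hypothetical stand-ins:

    import numpy as np

    def local_impulse_response(x, j, forward, reconstruct, delta=1e-3):
        e_j = np.zeros_like(x)
        e_j[j] = delta                        # small impulse at pixel j
        xhat0 = reconstruct(forward(x))       # reconstruction of the mean data
        xhat1 = reconstruct(forward(x + e_j)) # reconstruction after perturbation
        return (xhat1 - xhat0) / delta        # the LIR at pixel j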

References

    1. Leahy RM, Qi J. Statistical approaches in quantitative positron emission tomography. Stat Comput. 2000;10:147–65.
    2. Shepp LA, Logan BF. The Fourier reconstruction of a head section. IEEE Trans Nucl Sci. 1974;21:21–43.
    3. Barrett HH, Swindell W. Radiological imaging: the theory of image formation, detection, and processing. Academic Press. 1996.
    4. Wilson DW, Tsui BMW. Noise properties of filtered-backprojection and ML-EM reconstructed emission tomographic images. IEEE Trans Nucl Sci. 1993;40:1198–203.
    5. Kaufman L. Implementing and accelerating the EM algorithm for positron emission tomography. IEEE Trans Med Imaging. 1987;6:37–51.
