Highly accurate inverse consistent registration: a robust approach

Martin Reuter et al. Neuroimage. 2010 Dec;53(4):1181-96. doi: 10.1016/j.neuroimage.2010.07.020. Epub 2010 Jul 14.
Abstract

The registration of images is a task at the core of many applications in computer vision. In computational neuroimaging, where the automated segmentation of brain structures is frequently used to quantify change, a highly accurate registration is necessary for motion correction of images taken in the same session, or across time in longitudinal studies where changes in the images can be expected. This paper, inspired by Nestares and Heeger (2000), presents a method based on robust statistics to register images in the presence of differences such as jaw movement, differential MR distortions and true anatomical change. The approach we present guarantees inverse consistency (symmetry), can deal with different intensity scales and automatically estimates a sensitivity parameter to detect outlier regions in the images. The resulting registrations are highly accurate due to their ability to ignore outlier regions, and show superior robustness with respect to noise, intensity scaling and outliers when compared to state-of-the-art registration tools such as FLIRT (in FSL) or the coregistration tool in SPM.
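As a reading aid (not part of the original abstract), the kind of robust objective described above can be sketched as follows; the notation is illustrative and the exact functional used in the paper may differ. With a target image I_T, a source image I_S, and a rigid transform M, a robust registration minimizes

    \min_{M} \sum_{x} \rho\big( I_T(x) - I_S(Mx) \big),
    \qquad
    \rho(r) =
    \begin{cases}
      \frac{c^2}{6}\left[ 1 - \left( 1 - (r/c)^2 \right)^{3} \right], & |r| \le c,\\
      \frac{c^2}{6}, & |r| > c,
    \end{cases}

where ρ is Tukey's biweight function (Fig. 2) and c is the saturation (sensitivity) parameter: residuals beyond c contribute only a constant penalty, so outlier regions stop pulling on the estimate. The symmetry guarantee mentioned in the abstract corresponds to a formulation in which source and target enter the problem symmetrically (for example by mapping both images into a halfway space), so that registering I_S to I_T and I_T to I_S yields exactly inverse transforms.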


Figures

Figure 1
Robust registration of longitudinal tumor data (same slice of five acquisitions at different times). Left: target (first time point). Top row: aligned images. Bottom row: overlay of detected change/outlier regions (red/yellow). The outlier influence is automatically reduced during the iterative registration procedure to obtain highly accurate registrations of the remainder of the image; see also Fig. 6.
Figure 2
Tukey's robust biweight function (green) limits the influence of large errors, in contrast to the parabola (red).
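For concreteness, here is a minimal sketch of Tukey's biweight and the weight it induces in an iteratively reweighted least squares scheme (Python/NumPy; the default saturation c = 4.685 is the classical value, which also appears in Fig. 13 — everything else is illustrative and not taken from the paper's implementation):

    import numpy as np

    def tukey_biweight(r, c=4.685):
        # Penalty: quadratic-like near zero, constant (c^2/6) beyond the saturation c.
        r = np.atleast_1d(np.asarray(r, dtype=float))
        rho = np.full_like(r, c**2 / 6.0)
        inside = np.abs(r) <= c
        rho[inside] = (c**2 / 6.0) * (1.0 - (1.0 - (r[inside] / c) ** 2) ** 3)
        return rho

    def tukey_weight(r, c=4.685):
        # IRLS weight w(r) = rho'(r)/r: close to 1 for small residuals,
        # exactly 0 for |r| > c, so saturated voxels are ignored in the next fit.
        r = np.atleast_1d(np.asarray(r, dtype=float))
        w = np.zeros_like(r)
        inside = np.abs(r) <= c
        w[inside] = (1.0 - (r[inside] / c) ** 2) ** 2
        return w

Plotting tukey_biweight against r reproduces the green curve of Fig. 2; reweighting the residual histogram of Fig. 3 with tukey_weight is the mechanism that suppresses the heavy tails shown in Fig. 4.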
Figure 3
Distribution of residuals after successful registration together with the Gaussian (red) and robust (green) models (produced by the two functions from Fig. 2).
Figure 4
Zoom-in of the residual distribution of Fig. 3 with the weighted residual distribution overlaid in green. The heavy tails are significantly reduced when using the robust weights.
Figure 5
Gaussian filter at the center (σ = max(width, height, depth)/6).
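Taking the caption literally, the filter width is derived from the volume dimensions; a small sketch of such a center-focused Gaussian weight volume (assuming an isotropic Gaussian centered in the image, which is an interpretation rather than the paper's code) could be:

    import numpy as np

    def center_gaussian_weights(width, height, depth):
        # sigma = max(width, height, depth) / 6, as stated in the Fig. 5 caption.
        sigma = max(width, height, depth) / 6.0
        x, y, z = np.meshgrid(np.arange(width), np.arange(height), np.arange(depth),
                              indexing="ij")
        d2 = ((x - (width - 1) / 2.0) ** 2
              + (y - (height - 1) / 2.0) ** 2
              + (z - (depth - 1) / 2.0) ** 2)
        return np.exp(-d2 / (2.0 * sigma ** 2))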
Figure 6
The red/yellow regions (bottom row) are detected as outlier regions during the registration procedure of this multiecho MPRAGE test-retest data. Their influence is automatically reduced. The detected outliers agree with the non-rigid differences after successful registration (top row), located mainly in the neck, eye, scalp and jaw/tongue regions; see also Fig. 1.
Figure 7
Difference after alignment. Left: FLIRT MI (the visible structures in the brain indicate misalignment). Right: Robust method (accurate alignment, residual differences due to noise and resampling). The top shows the difference images and the bottom a zoom-in into the aligned target (red) and source (green). A good alignment should be yellow (right) while the inaccurate registration shows misaligned red and green edges (left).
Figure 8
Comparison of inverse consistency using different methods: FLIRT LS (least squares), FLIRT CR (correlation ratio), FLIRT MI (mutual information), SPM, LS (our implementation with a least squares instead of the robust error function), Robust, Robust-I (with intensity scaling) and Robust-I-SS (with subsampling at the highest resolution). The white circles represent the individual registrations.
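One common way to quantify the inverse consistency compared in Fig. 8 (not necessarily the exact measure used for the plot) is to compose the forward and backward rigid transforms and report how far the composition deviates from the identity; for 4x4 homogeneous matrices and a nominal head radius (here 100, a placeholder) this can look like:

    import numpy as np

    def inverse_consistency_error(M_fwd, M_bwd, radius=100.0):
        # If registration were perfectly inverse consistent, M_bwd @ M_fwd == identity.
        C = M_bwd @ M_fwd
        R = C[:3, :3] - np.eye(3)   # residual rotation/scaling part
        t = C[:3, 3]                # residual translation (same units as radius)
        # Upper bound on the displacement of any point within `radius` of the origin.
        return np.linalg.norm(R, 2) * radius + np.linalg.norm(t)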
Figure 9
Close-ups of test images: original (left) with Gaussian noise σ = 10 (middle) and with outlier boxes (right).
Figure 10
Accuracy of the different methods (see Fig. 8). The four different tests are: random rigid motion, additional Gaussian noise (σ = 10), 80 boxes of outlier data, and intensity scaling.
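The perturbations behind Figs. 9 and 10 (added Gaussian noise, outlier boxes, global intensity scaling, each evaluated as a separate test) can be mimicked in spirit with a short generator; only the noise level σ = 10 and the count of 80 boxes come from the captions, while the box size, scaling factor and random-intensity choice below are assumptions:

    import numpy as np

    def perturb_volume(vol, sigma=10.0, n_boxes=80, box_size=10, scale=1.2, seed=0):
        # Illustrative test-data generator: noise + outlier boxes + intensity scaling.
        rng = np.random.default_rng(seed)
        out = vol.astype(float) * scale + rng.normal(0.0, sigma, vol.shape)
        for _ in range(n_boxes):
            corner = [rng.integers(0, max(1, s - box_size)) for s in vol.shape]
            box = tuple(slice(c, c + box_size) for c in corner)
            out[box] = rng.uniform(vol.min(), vol.max())   # overwrite with outlier data
        return out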
Figure 11
Accuracy of different methods (see Fig. 8) with respect to SPM (on the norm images).
Figure 12
Top: Accuracy of [Robust] for each individual subject. Bottom: Mean accuracy of the methods, where [Robust] and [Robust-I] depend on the saturation level (fixed across all subjects). [Robust] reaches its minimal average registration error at the fixed saturation level of c = 14 and [Robust-I] at c = 8.5. For most fixed saturation levels, both methods perform better on average than FLIRT or SPM (note that the averages of [FLIRT-LS] and [FLIRT-CR] almost coincide; compare with Fig. 11, middle).
Figure 13
Top: A fixed low saturation of c = 4.685 (high outlier sensitivity) in a registration with intensity differences and non-linearities results in too many outliers and consequently in misalignment. Bottom: Automatic sensitivity estimation adjusts to a higher saturation value (lower outlier sensitivity) to register the images successfully. The detected outlier regions are labeled red/yellow.
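The automatic sensitivity (saturation) estimation referred to in Fig. 13 is not spelled out on this page; purely to illustrate the idea, and explicitly not as the authors' estimator, one could tie c to a high quantile of the current residuals so that only a small fraction of voxels saturates:

    import numpy as np

    def estimate_saturation(residuals, outlier_fraction=0.05, c_min=4.685):
        # Heuristic sketch only: pick c so that roughly `outlier_fraction` of the
        # voxels exceed it, never dropping below the classical minimum c_min = 4.685.
        r = np.abs(np.asarray(residuals, dtype=float)).ravel()
        c = float(np.quantile(r, 1.0 - outlier_fraction))
        return max(c, c_min)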
Figure 14
Registration accuracy for each subject depending on the center-focused weight W (Robust top, Robust-I bottom). Red horizontal line: average of the best registration per subject. Black curve: average performance at a specific W. Dashed curves: individual subjects' results.
Figure 15
Error of the motion correction task in the brain region for different registration methods (top: comparison of sums of squared errors; bottom: edge count of the average image). Both plots show the signed difference to Robust-I.
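The two evaluation measures in Fig. 15 can be approximated as follows (a sketch; the brain mask, the gradient-based edge detector and its threshold are assumptions, not details taken from the paper):

    import numpy as np

    def sse_in_brain(aligned, target, brain_mask):
        # Sum of squared intensity differences restricted to the brain region.
        d = (aligned.astype(float) - target.astype(float))[brain_mask]
        return float(np.sum(d ** 2))

    def edge_count(avg_img, threshold=30.0):
        # Count voxels whose gradient magnitude in the average image exceeds a threshold.
        g = np.gradient(avg_img.astype(float))
        mag = np.sqrt(sum(gi ** 2 for gi in g))
        return int(np.count_nonzero(mag > threshold))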
