J Microsc. 2011 Aug;243(2):154-71. doi: 10.1111/j.1365-2818.2011.03489.x. Epub 2011 Mar 1.

Robust, globally consistent and fully automatic multi-image registration and montage synthesis for 3-D multi-channel images


C-L Tsai et al. J Microsc. 2011 Aug.

Abstract

The need to map regions of brain tissue that are much wider than the field of view of the microscope arises frequently. One common approach is to collect a series of overlapping partial views, and align them to synthesize a montage covering the entire region of interest. We present a method that advances this approach in multiple ways. Our method (1) produces a globally consistent joint registration of an unorganized collection of three-dimensional (3-D) multi-channel images with or without stage micrometer data; (2) produces accurate registrations withstanding changes in scale, rotation, translation and shear by using a 3-D affine transformation model; (3) achieves complete automation, and does not require any parameter settings; (4) handles low and variable overlaps (5-15%) between adjacent images, minimizing the number of images required to cover a tissue region; (5) has the self-diagnostic ability to recognize registration failures instead of delivering incorrect results; (6) can handle a broad range of biological images by exploiting generic alignment cues from multiple fluorescence channels without requiring segmentation and (7) is computationally efficient enough to run on desktop computers regardless of the number of images. The algorithm was tested with several tissue samples of at least 50 image tiles, involving over 5000 image pairs. It correctly registered all image pairs with an overlap greater than 7%, correctly recognized all failures, and successfully joint-registered all images for all tissue samples studied. This algorithm is disseminated freely to the community as included with the Fluorescence Association Rules for Multi-Dimensional Insight toolkit for microscopy (http://www.farsight-toolkit.org).
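
For readers who want a concrete picture of the transformation model in point (2), the following minimal sketch (Python/NumPy, written for this summary and not taken from the paper; the matrix and offsets are hypothetical) applies a 12-parameter 3-D affine transform, which accommodates scale, rotation, translation and shear, to voxel coordinates:

    import numpy as np

    def apply_affine_3d(points, A, t):
        # Map N x 3 voxel coordinates of a moving stack into the fixed stack's space.
        return points @ A.T + t

    # Hypothetical parameters: ~2% scale change, a 2-degree rotation about z, and a
    # translation that includes a small axial (z) shift between adjacent tiles.
    theta = np.deg2rad(2.0)
    A = 1.02 * np.array([[np.cos(theta), -np.sin(theta), 0.0],
                         [np.sin(theta),  np.cos(theta), 0.0],
                         [0.0,            0.0,           1.0]])
    t = np.array([930.0, 12.5, -5.0])
    corners = np.array([[0, 0, 0], [1023, 1023, 57]], dtype=float)
    print(apply_affine_3d(corners, A, t))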


Figures

Figure 1
Illustrating the importance of fully three-dimensional (3-D) registration. Adjacent image stacks with an equal number of optical slices may still be offset axially relative to the objects in the tissue. (A) Two adjacent confocal stacks represented by horizontal rectangles, and hypothetical objects within the confocal image stacks represented by colored ovals. Arbitrarily registering the stacks using the top or bottom optical slice of the image stack does not yield an accurate montage. (B) One stack must be shifted along the z axis relative to the other. In practice, confocal image stacks often contain varying numbers of optical slices. Panels (C) and (D) are slice 30 from adjacent confocal stacks of rat brain tissue stained with a fluorescent antibody against the microglial-specific protein Iba-1. However, they are not the matching slices. For correct alignment, the stack in (D) should be shifted about 5 slices in the z-direction. Panel (E) shows slice 30 of the correctly aligned montage produced by our 3-D registration algorithm. Panels (C), (D), and (E) are taken from data set #2.
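
As an illustration of how such an axial offset could be recovered (a simplified sketch under our own assumptions, not the method described in the paper), one can slide one stack along z and keep the shift that maximizes the average correlation of overlapping optical slices:

    import numpy as np

    def estimate_z_offset(stack_a, stack_b, max_shift=10):
        # stack_a, stack_b: arrays of shape (num_slices, height, width).
        def corr(x, y):
            x = x - x.mean()
            y = y - y.mean()
            denom = np.sqrt((x * x).sum() * (y * y).sum())
            return (x * y).sum() / denom if denom > 0 else 0.0

        best_shift, best_score = 0, -np.inf
        for dz in range(-max_shift, max_shift + 1):
            scores = [corr(stack_a[z], stack_b[z + dz])
                      for z in range(stack_a.shape[0])
                      if 0 <= z + dz < stack_b.shape[0]]
            if scores and np.mean(scores) > best_score:
                best_shift, best_score = dz, np.mean(scores)
        return best_shift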
Figure 2
Illustrating preliminary estimation of the lateral offset for a pair of 5-channel images of the cortical surface (blue: nuclei, purple: Nissl, green: microvasculature, yellow: microglia, red: astrocytes). Panels (A) and (B) show the maximum-intensity projections of the two adjacent optical stacks with all 5 channels overlaid. Panels (C) and (D) show the generic landmarks for these images overlaid on the fusion image derived by combining the 5-channel data into one. The yellow circles indicate corners and the yellow lines indicate the locations and normal directions of edge points. Panel (E) shows the alignment produced by the GDB-ICP pair-wise registration of the projection. The transformation computed from panels (C) and (D) was applied to the 5-color projections, and used to construct a 2-D montage of these two projection images. Images in this figure are taken from data set #2.
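
A minimal sketch (our own, assuming the stack is stored as a NumPy array; not the paper's code) of the two preprocessing operations mentioned here, maximum-intensity projection and multi-channel fusion, prior to 2-D landmark extraction:

    import numpy as np

    def max_intensity_projection(stack):
        # stack: (num_slices, height, width, num_channels) -> (height, width, num_channels)
        return stack.max(axis=0)

    def fuse_channels(projection, weights=None):
        # Combine all channels into one grayscale image for generic landmark extraction.
        num_channels = projection.shape[-1]
        w = np.full(num_channels, 1.0 / num_channels) if weights is None else np.asarray(weights, dtype=float)
        return (projection * w).sum(axis=-1)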
Figure 3
Illustrating the need for joint registration with global consistency. (A) A 4-image montage based on pair-wise registration. The blue box indicates the reference image for the montage. Corresponding points between the neighboring images are mapped inconsistently to different locations, resulting in blurry overlap regions outside the reference image, circled in red. (B) A montage of the same 4 images constructed with globally consistent joint registration where points are well aligned even outside the reference image space. The boxes outline the 4 images that were jointly registered. Images in this figure are from data set #2.
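
To convey the idea of global consistency, here is a deliberately simplified sketch (translations only; the paper uses full 3-D affine transforms) that solves for all tile positions at once from pairwise offset estimates, so that every neighbouring pair is reconciled in a single least-squares solution rather than chained through one reference image:

    import numpy as np

    def joint_positions(num_tiles, pairwise_offsets):
        # pairwise_offsets: dict {(i, j): offset_vector} where offset ~ position_j - position_i.
        dim = len(next(iter(pairwise_offsets.values())))
        rows, rhs = [], []
        for (i, j), t in pairwise_offsets.items():
            row = np.zeros(num_tiles)
            row[i], row[j] = -1.0, 1.0
            rows.append(row)
            rhs.append(np.asarray(t, dtype=float))
        # Anchor tile 0 at the origin so the system has a unique solution.
        anchor = np.zeros(num_tiles)
        anchor[0] = 1.0
        rows.append(anchor)
        rhs.append(np.zeros(dim))
        A = np.vstack(rows)
        b = np.vstack(rhs)
        positions, *_ = np.linalg.lstsq(A, b, rcond=None)
        return positions  # (num_tiles, dim) array of globally consistent tile positions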
Figure 4
Flowchart demonstrating the basic steps, including the inputs and outputs, for performing (in turn) pairwise registration, joint registration, and montage synthesis. Pairwise registration is performed multiple times (for each possible image pair) but is depicted in the flowchart only once; in practice, the user performs this step as many times as required for the dataset before moving on to joint registration.
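
Reading the flowchart as code, the overall flow could be summarized as below. This is a schematic outline written for this summary; the helper functions (register_pair, joint_register, synthesize_montage) are hypothetical stand-ins passed in as callables, not the toolkit's actual API:

    def candidate_pairs(tiles):
        # Assume every tile pair is a candidate when no stage micrometer data is available.
        return [(i, j) for i in range(len(tiles)) for j in range(i + 1, len(tiles))]

    def build_montage(tiles, register_pair, joint_register, synthesize_montage):
        pairwise = {}
        for i, j in candidate_pairs(tiles):
            result = register_pair(tiles[i], tiles[j])   # Step 1: pairwise 3-D registration
            if result is not None:
                pairwise[(i, j)] = result
        transforms, rejected = joint_register(pairwise)  # Step 2: globally consistent solve;
                                                         # inconsistent pairs flagged as failures
        return synthesize_montage(tiles, transforms)     # Step 3: blend tiles into one volume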
Figure 5
Maximum intensity projection of the montage of Dataset #1 taken from the rat entorhinal cortex. The montage is 4,756×2,943×58 voxels in size. The blue channel shows the cell nuclei, and the green channel indicates the neurons. Spatial co-localization of green and blue signal (turquoise) indicates the locations of neurons. The confocal images were obtained from horizontal sections of rat brain stained with the fluorescent nuclear dye To-Pro-3 iodide (Invitrogen) and the fluorescent secondary antibody Alexa Fluor 488 (Invitrogen) following initial labeling with a primary antibody against NeuN, which is expressed specifically in neurons.
Figure 6
Maximum intensity projection of the montage of Dataset #2 taken from the rat cerebral cortex. The montage is 4,786×13,776×68 voxels in size. The five channels display: microglia in yellow, astrocytes in red, neurons in purple, vessel laminae in green, and nuclei in blue. The montage covers an entire strip of cerebral cortex, extending into corpus callosum and hippocampus.
Figure 7
This figure illustrates the robustness of montage synthesis in coping with spatial distortion in the data set. Panel A shows three optical slices (16, 17, and 18) from the confocal stack collected at location H,05 as marked in Panel B. Slice thickness is 0.7 microns. The immersion oil during collection of this data set was inadvertently contaminated with water, presumably altering the index of refraction of the immersion medium and resulting in spatial distortion between optical slices. Moving up and down through the collected image stack produces a “rippling” effect; the x-y positions of neuronal nuclei appear to jitter back and forth as one moves through adjacent slices in the z-plane. Note the right-left shifts in position of two neuronal nuclei (indicated by yellow arrows) moving from slice 16 to 18. Panel B shows the relative locations of each confocal image stack collected in the data set; the three confocal stacks affected by spatial distortions between optical slices are indicated by yellow crosshatches. Panel C shows the result of automated montage synthesis; despite the irregular distortions in individual optical slices, the algorithm correctly placed the problematic optical stacks in the final montage. Images in this figure are from dataset #3.
Figure 8
Summary of registration and self-diagnosis performance as a function of image overlap. The NC error is plotted for the 3 datasets in Tables I and II. Each data point in these scatter plots corresponds to an image pair. The horizontal lines indicate automatically estimated threshold values; data points above this threshold are declared failures by the joint registration (Step 2). Data points with an NC error of 1 correspond to image pairs that overlap but fail to register. Points with zero overlap correspond to non-overlapping image pairs, for which registration should fail. (A) The scatter plot for Dataset #1 with the error threshold equal to 0.28. (B) The scatter plot for Dataset #2 with the error threshold equal to 0.16. (C) The scatter plot for Dataset #3 with the error threshold equal to 0.15. The wider spread of the errors for correctly registered pairs below the threshold value can be explained by the less favorable imaging conditions that result in irregular (non-affine) image distortion for some image pairs.
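
As a rough illustration of the self-diagnosis step (our own sketch; the exact definition of the NC error used in the paper may differ), an error of the form 1 minus the normalized correlation of the aligned overlap regions is bounded by 1 and can be compared against a per-dataset threshold such as the 0.15-0.28 values shown here:

    import numpy as np

    def nc_error(overlap_a, overlap_b):
        # 1 - normalized correlation of the two aligned overlap regions (0 = perfect match).
        a = overlap_a - overlap_a.mean()
        b = overlap_b - overlap_b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return 1.0 if denom == 0 else 1.0 - (a * b).sum() / denom

    def is_registration_failure(overlap_a, overlap_b, threshold):
        # threshold is estimated automatically per dataset, e.g. 0.16 for Dataset #2 above.
        return nc_error(overlap_a, overlap_b) > threshold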
Figure 9
The only instance of incorrect alignment in Dataset #2, which was nonetheless automatically and correctly detected as a registration failure by the joint registration (Step 2). (A, B) The original images. (C) Registration results displayed by projecting the image in Panel A in red and the image in Panel B in green. The NC error for this image pair is 0.28 (>0.16), so it is correctly recognized as an outlier in the joint registration.
Figure 10
Illustrating successful registration of images with problematic staining. The nuclear channel is in blue and the neuronal channel in green. The confocal stack in A contains higher background staining than the adjacent stack in B. The arrows indicate the corresponding areas in the two image stacks. C is a crop of the neuronal-channel montage, demonstrating accurate alignment, with neurons from A in green and neurons from B in red. When the two images are well aligned, the neurons in the overlap area correctly appear yellow, as seen here. Images in this figure are from dataset #1.
Figure 11
An example of accurate alignment of images taken at very different magnifications around an electrode insertion site. The dark hole in the center indicates the insertion site. The two images were acquired using a 20× objective (0.9 NA) at zoom factors of 1.0 and 2.5, giving final magnifications of 20× and 50×, respectively. (A) The 20× image, of size 1,024×1,024×106 voxels, covering a tissue volume of 772×772×85 μm³. (B) The 50× image, of size 1,024×1,024×148 voxels, covering a tissue volume of 310×310×89 μm³. (C) The result of registration. The 50× image, shown in red, is transformed into the space of the 20× image, shown in green. The insertion site appearing yellow shows accurate alignment of the two images.
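
As a quick consistency check (our arithmetic, not from the paper), the lateral voxel size implied by each field of view confirms the roughly 2.5-fold difference in magnification between the two images:

    pixels_per_side = 1024
    voxel_20x = 772.0 / pixels_per_side   # ~0.754 um per voxel laterally at 20x
    voxel_50x = 310.0 / pixels_per_side   # ~0.303 um per voxel laterally at 50x
    print(voxel_20x, voxel_50x, voxel_20x / voxel_50x)  # ratio ~2.49, close to the 2.5x zoom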
