Comment

Nat Methods. 2018 Nov;15(11):868-870. doi: 10.1038/s41592-018-0194-9.

Deep learning to predict microscope images


Roger Brent et al. Nat Methods. 2018 Nov.

Abstract

A species of neural network first described in 2015 can be trained to translate between images of the same field of view acquired by different modalities. Trained networks can use information inherent in grayscale images of cells to predict fluorescent signals.
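As a rough illustration of the idea in the abstract, the sketch below (Python/PyTorch) trains a small convolutional network on paired images of the same field acquired under two modalities and then applies it to a new grayscale image. The layer sizes, training loop, and random stand-in tensors are illustrative assumptions, not the published models or data.

import torch
import torch.nn as nn

# Stand-in "paired" images: 8 fields of view, 1 channel, 64x64 pixels each.
# (Random tensors here; real training data would be registered M1/M2 image pairs.)
m1_images = torch.randn(8, 1, 64, 64)   # grayscale transmitted-light images (M1)
m2_images = torch.randn(8, 1, 64, 64)   # matched fluorescence images (M2)

# A deliberately tiny fully convolutional translator (illustrative only).
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()   # per-pixel comparison of predicted and measured M2 images

for step in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(m1_images), m2_images)
    loss.backward()      # gradients adjust the filter values
    optimizer.step()

# After training, predict the fluorescent signal for a new, unlabeled field of view.
with torch.no_grad():
    predicted_m2 = model(torch.randn(1, 1, 64, 64))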


Figures

Fig 1.
a) Translating to predict. Top: the network in panel c is trained on pairs of images of cells acquired under modality M1 (e.g., brightfield) and modality M2 (here, fluorescence images of signal from a protein at the nuclear envelope). Bottom: given a new brightfield image, the trained network predicts the nuclear envelope signal from information in an image of a field of unlabeled cells.

b) A network that classifies. A simple NN, similar to AlexNet, that identifies images of cats; its operation is shown in the captions. In an untrained network, the values of the filter matrices (which are convolved with upstream information to create feature maps) and the weights of the individual inputs to the downstream “fully connected” “neurons” (affine operations and activation functions) are set randomly; the filter values and input weights move toward optimal values during training. The network is trained by exposure to images, some labeled “Cat” and some labeled “Not cat”. During training, feature maps at the top level come to recognize human-intelligible image elements such as edges, while those in deeper levels come to recognize more abstract aspects of images that are less easily described by human observers. In these steps, information processing takes place via matrix operations and nonlinearization, while information storage (retention of knowledge acquired during training) takes place as changes in the values stored in the filter matrices and in the weights of inputs to the downstream “neurons”.

c) A network that translates. A “simple” NN descended from those in the references. This network translates between black-and-white images of cats and otherwise identical images that reveal the cats’ third, or inner, eyes. During training, pixelated M1 images are subjected to the same series of convolution, nonlinearization, and pooling steps as in panel b. Here, however, entries in the most downstream pooled maps are successively “upsampled” and combined with information coming from the intermediate-level feature maps on the left. The result for each pixel is compared with the pixel intensities of the training M2 images. Training continues by adjustment of filter values until the network learns the relationship between the M1 and M2 images. The trained network can then operate on a new M1 image to produce a predicted M2 image.
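The following is a minimal sketch of the kind of classifier described in panel b (Python/PyTorch). The layer counts, sizes, and random stand-in data are assumptions for illustration, not the network in the figure: convolutional filters produce feature maps, pooling shrinks them, and fully connected “neurons” map the deepest features to a “Cat”/“Not cat” decision, with the randomly initialized filter values and weights adjusted during training.

import torch
import torch.nn as nn

class TinyCatClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),   # filters convolved with the image -> feature maps
            nn.ReLU(),                                    # nonlinearization
            nn.MaxPool2d(2),                              # pooled maps, 64x64 -> 32x32
            nn.Conv2d(8, 16, kernel_size=3, padding=1),   # deeper, more abstract feature maps
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 32x32 -> 16x16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 16 * 16, 32), nn.ReLU(),       # fully connected "neurons": affine op + activation
            nn.Linear(32, 1),                             # single logit: cat vs. not cat
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = TinyCatClassifier()
images = torch.randn(8, 1, 64, 64)             # stand-in grayscale images
labels = torch.randint(0, 2, (8, 1)).float()   # 1 = "Cat", 0 = "Not cat"

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()       # training moves filter values and input weights toward better values
    optimizer.step()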
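In the same spirit, the translator of panel c can be sketched as a small encoder-decoder with a skip connection (Python/PyTorch; sizes, data, and the training loop are again illustrative assumptions, a toy U-Net-style network rather than the published models): the deepest pooled maps are upsampled and concatenated with intermediate-level feature maps, the output is compared pixel by pixel with the training M2 images, and the trained network is then applied to a new M1 image.

import torch
import torch.nn as nn

class TinyTranslator(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU())    # intermediate-level feature maps
        self.enc2 = nn.Sequential(nn.Conv2d(8, 16, 3, padding=1), nn.ReLU())   # deeper feature maps
        self.pool = nn.MaxPool2d(2)                                            # pooling, 64x64 -> 32x32
        self.up = nn.ConvTranspose2d(16, 8, kernel_size=2, stride=2)           # "upsampling" back to 64x64
        self.dec = nn.Sequential(nn.Conv2d(16, 8, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(8, 1, 3, padding=1))                # predicted M2 image

    def forward(self, x):
        f1 = self.enc1(x)                       # 8 x 64 x 64
        f2 = self.enc2(self.pool(f1))           # 16 x 32 x 32 (most downstream maps)
        up = self.up(f2)                        # 8 x 64 x 64, upsampled
        combined = torch.cat([up, f1], dim=1)   # combine with intermediate-level maps (skip connection)
        return self.dec(combined)

model = TinyTranslator()
m1_images = torch.randn(8, 1, 64, 64)   # stand-in M1 (e.g., brightfield) images
m2_images = torch.randn(8, 1, 64, 64)   # matched M2 (e.g., fluorescence) images

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()                  # per-pixel comparison with the training M2 images

for step in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(m1_images), m2_images)
    loss.backward()                     # filter values adjusted until the M1 -> M2 mapping is learned
    optimizer.step()

# The trained network then operates on a new M1 image to produce a predicted M2 image.
with torch.no_grad():
    predicted_m2 = model(torch.randn(1, 1, 64, 64))

Concatenating the upsampled maps with the intermediate-level maps, as in the sketch, is one common way to carry fine spatial detail from the downsampling path into the predicted image; other combination schemes are possible.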


References

    1. Christiansen EM, Yang SJ, Ando DM, Javaherian A, Skibinski G, Lipnick S, Mount E, O’Neil A, Shah K, Lee AK, Goyal P, Fedus W, Poplin R, Esteva A, Berndl M, Rubin LL, Nelson P, and Finkbeiner S (2018). In silico labeling: predicting fluorescent labels in unlabeled images. Cell 173(3), 792–803.
    2. Ounkomol C, Seshamani S, Maleckar MM, Collman F, and Johnson GR (2018). Label-free prediction of three-dimensional fluorescence images from transmitted-light microscopy. Nat Methods, 2018 September 17. doi: 10.1038/s41592-018-0111-2. [Epub ahead of print]
    3. Rosenblatt F (1958). The perceptron: a probabilistic model for information storage and organization in the brain. Psychological Review 65, 386–408.
    4. McCulloch WS, and Pitts WH (1943). A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics 5, 115–133.
    5. Minsky M, and Papert S (1969). Perceptrons: An Introduction to Computational Geometry. MIT Press, Cambridge, Massachusetts.
