Automated Measurements of Key Morphological Features of Human Embryos for IVF

B D Leahy et al. Med Image Comput Comput Assist Interv. 2020 Oct;12265:25-35. doi: 10.1007/978-3-030-59722-1_3. Epub 2020 Sep 29.

Abstract

A major challenge in clinical In-Vitro Fertilization (IVF) is selecting the highest-quality embryo to transfer to the patient in the hope of achieving a pregnancy. Time-lapse microscopy provides clinicians with a wealth of information for selecting embryos. However, the resulting movies of embryos are currently analyzed manually, which is time-consuming and subjective. Here, we automate feature extraction from time-lapse microscopy of human embryos with a machine-learning pipeline of five convolutional neural networks (CNNs). Our pipeline consists of (1) semantic segmentation of the regions of the embryo, (2) regression predictions of fragment severity, (3) classification of the developmental stage, and object instance segmentation of (4) cells and (5) pronuclei. Our approach greatly speeds up the measurement of quantitative, biologically relevant features that may aid in embryo selection.
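As a rough illustration of how the outputs of these five networks could be collected for each time-lapse frame, a minimal sketch follows; the record and field names are assumptions for illustration, not the authors' data model.

    # Hypothetical per-frame output of the five-network pipeline described in the
    # abstract; field names and types are illustrative, not taken from the paper.
    from dataclasses import dataclass, field
    from typing import List, Optional

    import numpy as np

    @dataclass
    class FrameFeatures:
        segmentation: np.ndarray          # (H, W) per-pixel class map (network 1)
        fragmentation_score: float        # regression output on a 0-3 scale (network 2)
        stage: Optional[str] = None       # developmental stage, if classified (network 3)
        cell_masks: List[np.ndarray] = field(default_factory=list)        # network 4
        pronucleus_masks: List[np.ndarray] = field(default_factory=list)  # network 5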

Keywords: Deep Learning; Human Embryos; In-Vitro Fertilization.


Figures

Fig. 1.
Instead of performing one task, our unified pipeline extracts multiple features from each embryo. We first segment the image to locate the embryo (panel a, colored according to the predicted segmentation). The segmentation provides a region of interest (ROI, white box) for the other four networks, starting with embryo fragmentation (b); the image shown has a predicted fragmentation score of 0.26. If the embryo’s fragmentation score is less than 1.5, we classify the developmental stage (c); this image is classified as a 2-cell embryo. We then detect cells in cleavage-stage embryos (orange contours in d) and pronuclei in 1-cell embryos (magenta contours in e).
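A minimal sketch of the control flow this caption describes, assuming the five trained networks are available as plain callables; all function, class-label, and variable names here are hypothetical, while the step ordering and the 1.5 fragmentation threshold come from the caption.

    # Control-flow sketch for Fig. 1; only the ordering of the steps and the 1.5
    # threshold come from the caption -- every name below is an assumption.
    import numpy as np

    CLEAVAGE_STAGES = {"2-cell", "3-cell", "4-cell"}   # illustrative label set

    def crop_to_embryo(image, seg, embryo_class=3):
        """Placeholder ROI crop: bounding box of pixels predicted as inside the zona."""
        ys, xs = np.nonzero(seg == embryo_class)
        return image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

    def measure_frame(image, seg_net, frag_net, stage_net, cell_net, pn_net):
        seg = seg_net(image)               # (a) semantic segmentation of the frame
        roi = crop_to_embryo(image, seg)   # ROI passed to the other four networks
        frag = frag_net(roi)               # (b) fragmentation score on a 0-3 scale
        stage, cells, pronuclei = None, [], []
        if frag < 1.5:                     # stage is classified only below 1.5
            stage = stage_net(roi)         # (c) developmental stage
            if stage in CLEAVAGE_STAGES:
                cells = cell_net(roi)      # (d) cell instance masks
            if stage == "1-cell":
                pronuclei = pn_net(roi)    # (e) pronucleus instance masks
        return seg, frag, stage, cells, pronuclei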
Fig. 2.
Left: The zona pellucida network (ResNet101 FCN) performs semantic segmentation on the input image, predicting four class probabilities for each pixel (colored as purple: outside well, pink: inside well, green: zona pellucida, cyan: inside zona). Middle: 12 representative segmentations of 3 embryos from the test set. Right: The per-pixel accuracies of the segmentation for each class on the test set.
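A rough illustration of a four-class FCN like the one named in this caption, using torchvision's fcn_resnet101; the randomly initialized weights, dummy input, and missing normalization are placeholders, not the authors' trained model or preprocessing.

    # Illustrative 4-class semantic segmentation with an FCN (ResNet101 backbone).
    import torch
    from torchvision.models.segmentation import fcn_resnet101

    model = fcn_resnet101(num_classes=4)   # outside well, inside well, zona, inside zona
    model.eval()

    image = torch.rand(1, 3, 500, 500)     # dummy frame; real frames would be normalized
    with torch.no_grad():
        logits = model(image)["out"]       # (1, 4, H, W) per-pixel class scores
    probs = logits.softmax(dim=1)          # per-pixel class probabilities
    class_map = probs.argmax(dim=1)        # (1, H, W) predicted class per pixel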
Fig. 3.
Left: The fragmentation network (InceptionV3 architecture) scores embryos with a real number from 0 to 3; the image at left receives a fragmentation score of 2.46. Center: 8 representative fragmentation scores on the test set, shown as image-score pairs. Right: The distribution of the network’s predictions given the ground-truth label on the test set. The green distribution corresponds to images with a ground-truth label of 0; orange, those labeled 1; blue, 2; pink, 3.
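A minimal sketch of an InceptionV3-based regressor for the 0-3 fragmentation score, assuming the standard torchvision backbone with its classification head swapped for a single output; the caption names only the architecture and score range, so the rest is an assumption.

    # Illustrative InceptionV3 regressor; only the architecture name and the 0-3
    # score range come from the caption.
    import torch
    from torchvision.models import inception_v3

    model = inception_v3()                                # randomly initialized backbone
    model.fc = torch.nn.Linear(model.fc.in_features, 1)   # single regression output
    model.eval()

    frame = torch.rand(1, 3, 299, 299)                    # InceptionV3 expects 299x299 input
    with torch.no_grad():
        score = model(frame).squeeze()                    # untrained here, so not meaningful
    print(float(score.clamp(0.0, 3.0)))                   # scores are read on a 0-3 scale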
Fig. 4.
Left: The stage classification CNN (ResNeXt101) predicts a per-class probability for each image; the two bar plots show the predicted probabilities for the two images. Center: We use dynamic programming to find the most-likely non-decreasing trajectory (orange); the circled times t1 and t2 correspond to the predictions at left. Right: The distribution of predictions given the true labels, measured on the test set.
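The dynamic-programming step can be sketched as a monotone variant of a Viterbi-style search: given the per-frame stage probabilities from the classifier, choose the non-decreasing stage sequence with maximum total log-probability. The implementation below (function name, probs layout of T frames by K stages) is an assumption about how such a search could look, not the authors' code.

    # Most likely non-decreasing stage trajectory from per-frame stage probabilities.
    import numpy as np

    def monotone_stage_path(probs):
        """probs: (T, K) per-frame stage probabilities; returns stage indices of length T."""
        logp = np.log(np.clip(probs, 1e-12, None))
        T, K = logp.shape
        best = np.full((T, K), -np.inf)    # best[t, k]: best score ending in stage k at frame t
        back = np.zeros((T, K), dtype=int)
        best[0] = logp[0]
        for t in range(1, T):
            run_max = np.maximum.accumulate(best[t - 1])   # max over all stages <= k
            run_arg = np.zeros(K, dtype=int)
            cur = 0
            for k in range(K):
                if best[t - 1, k] >= best[t - 1, cur]:
                    cur = k
                run_arg[k] = cur
            best[t] = run_max + logp[t]    # the stage may stay the same or increase, never decrease
            back[t] = run_arg
        path = np.empty(T, dtype=int)
        path[-1] = int(best[-1].argmax())
        for t in range(T - 1, 0, -1):
            path[t - 1] = back[t, path[t]]
        return path

    # Example with three frames and three stages: the decoded path never decreases.
    probs = np.array([[0.7, 0.2, 0.1],
                      [0.3, 0.6, 0.1],
                      [0.4, 0.3, 0.3]])
    print(monotone_stage_path(probs))      # -> [0 1 1]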
Fig. 5.
The cell detection network (Mask R-CNN with a ResNet50 backbone) takes an image (left) and proposes candidate cells, each as an object mask with a confidence score from 0 to 1 (second from left). Center: The boundaries of the object mask, drawn as the cell’s contours (orange). Second from right: 12 cell instance segmentations for 4 embryos from the test set, shown as orange contours overlaid on the original images. Right: Histogram of the ratio of predicted to true areas for correctly identified cells in the test set.
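A rough sketch of instance segmentation with torchvision's Mask R-CNN (ResNet50 backbone), as named in the caption; the two-class setup (background plus cell), the 0.5 thresholds, and the random weights are assumptions, not the authors' trained model.

    # Illustrative Mask R-CNN inference producing instance masks and confidence scores.
    import torch
    from torchvision.models.detection import maskrcnn_resnet50_fpn

    model = maskrcnn_resnet50_fpn(num_classes=2)   # background + cell (assumed setup)
    model.eval()

    image = torch.rand(3, 500, 500)                # dummy embryo ROI
    with torch.no_grad():
        out = model([image])[0]                    # list of images in, one dict per image out
    masks = out["masks"]                           # (N, 1, H, W) soft instance masks
    scores = out["scores"]                         # confidence scores in [0, 1]
    keep = scores > 0.5                            # score threshold is an assumption
    cell_masks = masks[keep, 0] > 0.5              # binary masks; contours can be traced from these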
Fig. 6.
The pronuclei detection network (Mask R-CNN with a ResNet50 backbone) takes an image (left) and proposes candidate pronuclei, each as an object mask with a confidence score from 0 to 1 (second from left). Center: The boundaries of the object mask, drawn as the pronuclei contours (magenta). Second from right: 12 pronuclei instance segmentations for 4 embryos from the test set, shown as magenta contours overlaid on the original images; the rightmost images illustrate true negatives after the pronuclei have faded. Right: Histogram of the ratio of predicted to true areas for correctly identified pronuclei in the test set.
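The predicted-to-true area ratio summarized in the histograms of Figs. 5 and 6 could be computed along the following lines; the IoU-based rule used here to decide which detections count as correctly identified is an assumption, not stated in the captions.

    # Illustrative area-ratio computation for matched instance masks.
    import numpy as np

    def area_ratios(pred_masks, true_masks, iou_thresh=0.5):
        """Return predicted/true area ratios for each true mask matched to a prediction."""
        ratios = []
        for t in true_masks:
            best_iou, best_pred = 0.0, None
            for p in pred_masks:
                inter = np.logical_and(p, t).sum()
                union = np.logical_or(p, t).sum()
                iou = inter / union if union else 0.0
                if iou > best_iou:
                    best_iou, best_pred = iou, p
            if best_iou >= iou_thresh:                 # matching rule is an assumption
                ratios.append(best_pred.sum() / t.sum())
        return np.array(ratios)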

