Review

From 3D to 2D and back again

Niyazi Ulas Dinc et al. Nanophotonics. 2023 Jan 4;12(5):777-793. doi: 10.1515/nanoph-2022-0512. eCollection 2023 Mar.

Abstract

The prospect of the massive parallelism of optics enabling fast and low-energy-cost operations is attracting interest for novel photonic circuits, where 3-dimensional (3D) implementations have a high potential for scalability. Since the technology for data input-output channels is 2-dimensional (2D), there is an unavoidable need to take 2D-nD transformations into account. Similarly, 3D-to-2D transformations and their reverse are tackled in a variety of fields such as optical tomography, additive manufacturing, and 3D optical memories. Here, we review how these 3D-2D transformations are addressed using iterative techniques and neural networks. This high-level comparison across different, yet related, fields could yield a useful perspective for 3D optical design.

Keywords: 3D optical memory; additive manufacturing; inverse design; optical tomography; photonic circuit design.


Figures

Figure 1:
Optical tomography. (a) An overview of the optical tomography problem. A 3D object is illuminated with different plane waves, and 2D quantitative phase projections are measured for each illumination angle. (b) A standard off-axis holography setup for refractive index tomography. The illumination angle can be controlled using a pair of galvo mirrors. (c) Iterative optical diffraction tomography (ODT): a forward model (such as single scattering [23] or the beam propagation method [24]) computes the 2D projections for each illumination angle. By comparing this field to the measurements, a loss function is calculated, which is minimized by iteratively improving the reconstruction of the 3D refractive index. (d) Comparison of ODT reconstruction results for a hepatocyte cell using the Rytov approximation and iterative ODT with edge-preserving regularization (adapted from [23], Copyright OPTICA). The scale bar is 5 µm. (e) Tomographic results of two 10 µm polystyrene beads immersed in oil with n0 = 1.516, based on the inverse Radon transform and learning tomography (adapted from [24], Copyright OPTICA). (f) 3D reconstruction of a red blood cell using TomoNet (adapted from [28], Copyright SPIE). Panels (e) and (f) show that learning tomography and TomoNet mitigate the underestimation and elongation of the reconstructions.
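As a point of reference, the following is a minimal sketch of the iterative loop of panel (c) for a single 2D slice, with a straight-ray projection standing in for the single-scattering or beam-propagation forward models of [23, 24]; the function names, geometry, and simple non-negativity prior are illustrative assumptions, not the published implementations.

import numpy as np
from scipy.ndimage import rotate

def forward(volume, angle_deg):
    # Project the rotated 2D refractive-index map along one axis: one projection per angle.
    rotated = rotate(volume, angle_deg, reshape=False, order=1)
    return rotated.sum(axis=0)

def backproject(residual, angle_deg, shape):
    # Adjoint of forward(): smear the 1D residual back and rotate it into place.
    smeared = np.tile(residual, (shape[0], 1))
    return rotate(smeared, -angle_deg, reshape=False, order=1)

def reconstruct(measurements, angles_deg, shape, n_iter=50, step=1e-3):
    # Gradient descent on the summed squared error between predicted and measured projections.
    x = np.zeros(shape)
    for _ in range(n_iter):
        grad = np.zeros(shape)
        for meas, ang in zip(measurements, angles_deg):
            residual = forward(x, ang) - meas
            grad += backproject(residual, ang, shape)
        x -= step * grad
        x = np.clip(x, 0.0, None)  # crude non-negativity prior; edge-preserving
                                   # regularization as in [23] would enter here
    return x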
Figure 2:
Volumetric additive manufacturing (VAM) as tomographic back-projection. (a) The Radon transform allows calculating the set of 2D tomographic patterns from the 3D model. (b) The back-projection of these patterns into a rotating vial containing a photosensitive resin triggers its solidification. (c) Tomographic VAM exploits the nonlinear, thresholded response of the photosensitive material to light-induced polymerization. This polymerization threshold ensures that only the target object is fabricated, even though the resin outside the target volume inevitably receives some light once the vial has been illuminated from multiple angles. The liquid unpolymerized resin can be washed away after the print. Tomographic VAM has been used to (d) produce high-resolution support-free structures (taken from [36], Copyright Springer-Nature); (e) overprint around pre-existing solid objects (taken from [37], Copyright AAAS); fabricate (f) objects with tunable mechanical properties from thiol-ene resins (taken from [48], Copyright Wiley), (g) heat-resistant polymer-derived silicon oxycarbide ceramics (rearranged from [50], temperature indicated, Copyright Wiley), and (h) nanoparticle-based silica glass devices (taken from [51], Copyright AAAS); and (i) bioprint cell-laden hydrogels (taken from [46], Copyright Wiley). Scale bars: (d, f, g) 5 mm, (e) 10 mm, (h) 2 mm, (i) (from top left to bottom right) 2 mm, 1 mm, 500 μm, 250 μm.
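The pipeline of panels (a)-(c) can be sketched for a single 2D slice as follows; scikit-image's radon/iradon are used here as a stand-in, and in practice the projection patterns are numerically optimized (with non-negativity and dose constraints) rather than being the raw Radon transform, so this is only an illustration.

import numpy as np
from skimage.transform import radon, iradon

def vam_slice(target, n_angles=180, threshold=0.5):
    # target: 2D binary array (1 inside the object). Returns the simulated printed slice.
    angles = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    patterns = radon(target.astype(float), theta=angles)       # (a) projection patterns per angle
    patterns = np.clip(patterns, 0.0, None)                     # projected light must be non-negative
    dose = iradon(patterns, theta=angles, filter_name=None)     # (b) dose accumulated over all angles
    dose /= dose.max()
    return dose > threshold                                     # (c) thresholded polymerization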
Figure 3:
3D optical memory implementations. (a) Diffraction from a sinusoidal grating according to the Bragg matching condition. In the Ewald sphere representation, kR, kS, and kG refer to the wave vectors of the reference beam, the signal beam, and the recorded grating, respectively. The reference beam simply addresses and reads out the data stored in the grating. (b) Simple sketch of recording and read-out for a two-photon technique. Here, the address beam (the analog of the reference beam in holography) is depicted as a light sheet accessing one layer of the volume, while the data beam encodes the information. During read-out, the address beam selects the target layer to excite a fluorescence signal that is modulated according to the recorded data (following the description in [57]).
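Written out, the Bragg matching condition sketched in panel (a) is the standard momentum-conservation relation (same notation as the caption, with n the refractive index and λ the vacuum wavelength):

\mathbf{k}_S = \mathbf{k}_R + \mathbf{k}_G, \qquad |\mathbf{k}_S| = |\mathbf{k}_R| = \frac{2\pi n}{\lambda},

i.e. diffraction is efficient only when the grating vector kG connects two points lying on the Ewald sphere of radius 2πn/λ.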
Figure 4:
Different holographic strategies. (a) 90° geometry decoupling the non-diffracted beam from the modulated diffracted beam. kR, kS, and kG refer to the wave vectors of the reference beam, the signal beam, and the recorded grating, respectively. (b) Bragg selectivity in k-space separates the different pages of data by mapping them onto different Ewald spheres owing to their different carrier frequencies. The wave-vector clouds are indicated by the shaded regions, whose size depends inversely on the dimensions of the volume hologram, L_x and L_z, as shown. The same argument applies to the y-direction as well. (c) Schematic of a phase mask stack. The stacked phase masks exhibit volumetric properties when the separation between them is large enough for Fresnel propagation to take place. The varying phase can be encoded as varying thicknesses, which enables fabrication with a binary-index approach.
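The inverse dependence noted in panel (b) is, in essence, the Fourier uncertainty relation between the hologram dimensions and the spread of the recorded grating vectors; as a rough sketch, in the caption's notation,

\Delta k_x \approx \frac{2\pi}{L_x}, \qquad \Delta k_z \approx \frac{2\pi}{L_z},

so a thicker hologram (larger L_z) yields a tighter cloud along k_z and hence sharper Bragg selectivity between data pages.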
Figure 5:
Optical interconnection design. The goal is to determine the geometrical and material properties of the central grey volume that maps the input electric fields E_i^r to the output fields E_o^r with maximal efficiency and minimal cross talk.
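One common way to state this goal quantitatively, given here purely as an illustration rather than the specific figure of merit used in the works reviewed, is via normalized overlap integrals between the field E_out^r actually produced when the element is excited with E_i^r and each target output E_o^s:

\eta_{rs} = \frac{\left| \int E_o^{s\,*}\, E_{\mathrm{out}}^{r}\, \mathrm{d}^{2}r_{\perp} \right|^{2}}{\int |E_o^{s}|^{2}\, \mathrm{d}^{2}r_{\perp}\, \int |E_{\mathrm{out}}^{r}|^{2}\, \mathrm{d}^{2}r_{\perp}},

where the diagonal terms η_rr quantify efficiency and the off-diagonal terms η_rs (r ≠ s) quantify cross talk between channels.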
Figure 6:
Different approaches for the inverse design of volume optical elements. (a) Optically recorded holograms obtained from the interference of the incident field E_i (black) and the conjugated target field Ē_o* (blue). (b) Learning tomography: the input field is propagated through the guess structure by the beam propagation method (BPM) (black). The predicted output E_o is compared with the target field Ē_o*, and the error is backpropagated to iteratively update the structure (blue). (c) Adjoint variable method: the gradients of the objective function with respect to the design parameters are computed through two simulations, the forward simulation (black) and an adjoint, time-reversed simulation (blue) whose source depends on the forward fields and the objective function. (d) AI-based methods: a DNN maps the relationship between the permittivity and the output fields (black). The loss is computed as in (b) and backpropagated through the network (blue).
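A minimal numerical sketch of the learning-tomography loop of panel (b) is given below: a split-step beam-propagation forward model, an L2 loss against the target output field, and an iterative update of the guessed index contrast. The grid sizes and names are illustrative assumptions, and a finite-difference gradient stands in for the analytic error backpropagation (the blue path); this is not the implementation of the cited works.

import numpy as np

wavelength = 0.5e-6                  # vacuum wavelength [m]
dx, dz = 0.25e-6, 0.5e-6             # transverse and axial sampling [m]
nx, nz = 32, 4                       # transverse samples, number of slices
k0 = 2 * np.pi / wavelength

kx = 2 * np.pi * np.fft.fftfreq(nx, dx)
propagator = np.exp(-1j * kx**2 * dz / (2 * k0))      # paraxial free-space step over dz

def bpm_forward(dn, u_in):
    # Propagate the input field through the slices of index contrast dn, shape (nz, nx).
    u = u_in.copy()
    for m in range(nz):
        u = u * np.exp(1j * k0 * dn[m] * dz)           # thin phase screen of slice m
        u = np.fft.ifft(np.fft.fft(u) * propagator)    # diffraction to the next slice
    return u

def loss(dn, u_in, u_target):
    return np.sum(np.abs(bpm_forward(dn, u_in) - u_target) ** 2)

def reconstruct(u_in, u_target, n_iter=20, step=1e-2, eps=1e-5):
    # Gradient descent on the loss; finite differences replace analytic backpropagation.
    dn = np.zeros((nz, nx))
    for _ in range(n_iter):
        grad = np.zeros_like(dn)
        base = loss(dn, u_in, u_target)
        for idx in np.ndindex(dn.shape):
            dn[idx] += eps
            grad[idx] = (loss(dn, u_in, u_target) - base) / eps
            dn[idx] -= eps
        dn -= step * grad
    return dn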
Figure 7:
Different modalities for 3D optical circuitry. (a) Multilayer computer-generated optical volume element working as an interconnect in the optical domain, printed by two-photon polymerization. The scale bar measures 20 μm (taken from [72], Copyright De Gruyter). (b) Waveguide interconnects with complex 3D routing to perform image-processing filters (taken from [80], Copyright Optica). (c) Diffractive deep neural network for various classification tasks, experimentally demonstrated in the THz regime (taken from [81], Copyright AAAS). (d) Volumetric element optimized by the adjoint method for wavelength and polarization sorting, experimentally demonstrated in the THz regime (taken from [90], Copyright Optica).

References

    1. Wetzstein G., Ozcan A., Gigan S., et al. Inference in artificial intelligence with deep optics and photonics. Nature. 2020;588(7836):39–47. doi: 10.1038/s41586-020-2973-6.
    2. Bogaerts W., Pérez D., Capmany J., et al. Programmable photonic circuits. Nature. 2020;586(7828):207–216. doi: 10.1038/s41586-020-2764-0.
    3. Shen Y., Harris N. C., Skirlo S., et al. Deep learning with coherent nanophotonic circuits. Nat. Photonics. 2017;11(7):441–446. doi: 10.1038/nphoton.2017.93.
    4. Feldmann J., Youngblood N., Karpov M., et al. Parallel convolutional processing using an integrated photonic tensor core. Nature. 2021;589(7840):52–58. doi: 10.1038/s41586-020-03070-1.
    5. Xu X., Ren G., Feleppa T., et al. Self-calibrating programmable photonic integrated circuits. Nat. Photonics. 2022;16(8):595–602. doi: 10.1038/s41566-022-01020-z.
