Nat Commun. 2020 Apr 2;11(1):1637.
doi: 10.1038/s41467-020-15460-0.

Plasmonic ommatidia for lensless compound-eye vision



Leonard C Kogos et al. Nat Commun. 2020.

Abstract

The vision system of arthropods such as insects and crustaceans is based on the compound-eye architecture, consisting of a dense array of individual imaging elements (ommatidia) pointing along different directions. This arrangement is particularly attractive for imaging applications requiring extreme size miniaturization, wide-angle fields of view, and high sensitivity to motion. However, the implementation of cameras directly mimicking the eyes of common arthropods is complicated by their curved geometry. Here, we describe a lensless planar architecture, where each pixel of a standard image-sensor array is coated with an ensemble of metallic plasmonic nanostructures that only transmits light incident along a small geometrically-tunable distribution of angles. A set of near-infrared devices providing directional photodetection peaked at different angles is designed, fabricated, and tested. Computational imaging techniques are then employed to demonstrate the ability of these devices to reconstruct high-quality images of relatively complex objects.


Conflict of interest statement

The authors declare no competing interests.

Figures

Fig. 1. Compound eyes.
a Micrograph of the compound eyes of a horse fly. Used with permission (copyright: Michael Biehler/123RF.COM). b Schematic illustration of the apposition compound-eye architecture. c Artificial compound-eye camera based on a planar microlens array and a photodetector array separated by a glass plate. By design the two arrays have different periodicities, so that each sensor detects light incident along a different direction. d Compound-eye camera based on the angle-sensitive metasurfaces developed in the present work, where only light incident along a different direction is transmitted into each image sensor.
Fig. 2. Angle-sensitive metasurfaces.
a, b Schematic illustrations of the metasurface geometry and principle of operation. Light incident at the desired detection angle +θp (a) is diffracted by the grating coupler into surface plasmon polaritons (SPPs) propagating towards the slits, where they are preferentially scattered into the absorbing substrate. Light incident at the opposite angle −θp (b) is diffracted by the nanoparticle (NP) array into SPPs propagating toward the grating reflector, where they are diffracted back into radiation. Light incident at any other angle is instead completely reflected or diffracted away from the surface. c Calculated optical transmission coefficient at λ0 = 1550 nm through six different metasurfaces for p-polarized light versus angle of incidence θ on the x–z plane. The grating coupler period (number of NPs) ranges from 1465 to 745 nm (15 to 29) in order of increasing angle of peak detection. The NP widths vary between 250 and 570 nm. d–f Transmission coefficient of three metasurfaces from c as a function of both polar θ and azimuthal ϕ illumination angles, summed over x–z and y–z polarizations. In each map, kx and ky are the in-plane components of the incident-light wavevector, and the color scale is normalized to the maximum (MAX) transmission value. In e, the solid red circle of radius kSPP indicates the available SPP modes on the top metal surface; the dashed curved line highlights the incident directions of peak transmission; the horizontal grey arrows (having length 2π/Λ) illustrate how light incident along these directions can excite SPPs by negative-first-order diffraction; and the red arrows show the directions of propagation of the excited SPPs.
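As a rough illustration of how the peak detection angle scales with the coupler period, the sketch below evaluates the negative-first-order phase-matching condition described in the caption: an SPP is launched when the grating vector 2π/Λ makes up the difference between the SPP wavevector and the in-plane component of the incident light, i.e. n_spp·k0 = 2π/Λ − k0·sin(θp). The effective SPP index n_spp used here is an assumed illustrative value, not a number taken from the paper, so the printed angles are indicative only.

import numpy as np

# Negative-first-order SPP coupling (cf. Fig. 2e): phase matching reads
#   n_spp * k0 = 2*pi/period - k0 * sin(theta_p)
# which gives sin(theta_p) = lambda0/period - n_spp.

lambda0_nm = 1550.0   # free-space wavelength
n_spp = 1.09          # assumed effective SPP index on the loaded metal surface (illustrative)

def peak_angle_deg(period_nm):
    """Predicted polar angle of peak transmission, or None if no propagating solution."""
    s = lambda0_nm / period_nm - n_spp
    if abs(s) > 1.0:
        return None
    return float(np.degrees(np.arcsin(s)))

# A few periods within the design range quoted in the caption (745-1465 nm)
for period_nm in (1465, 1180, 1030, 775, 745):
    theta = peak_angle_deg(period_nm)
    label = "no propagating SPP" if theta is None else f"θp ≈ {theta:5.1f}°"
    print(f"Λ = {period_nm:4d} nm  ->  {label}")

With the assumed index, the shortest periods map to the largest peak angles and the longest periods to near-normal detection, consistent with the trend described in panel c.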
Fig. 3. Measurement results.
a–c Optical (a) and SEM (b, c) images of representative experimental samples. The scale bar is 100 μm in a, 4 μm in b, and 2 μm in c. In a, the entire metasurface of a complete device is seen through a Ti window covering the entire sample, which is introduced to avoid spurious photocurrent signals. The image of c was taken before fabrication of the NP array. d–g Measured angular dependence of the photocurrent of four devices based on the structures of Fig. 2, providing peak response near θp = 0° (d), 12° (e), 28° (f), and 65° (g). In each plot, the photocurrent is normalized to the peak value. SEM images reveal some deviations in the array periods and NP widths from their target design values. The measured values are Λ = 1440, 1180, 1030, and 775 nm and w = 240, 560, 526, and 256 nm for the devices of panels d, e, f, and g, respectively. h–k Line scans along the ϕ = 0° direction from the maps of d–g, respectively. l p- (i.e., x–z-) and s- (i.e., y–z-) polarized responsivity versus polar angle of incidence on the x–z plane, measured with three different samples: a reference device without any metal film and NP array, and two metasurface-coated devices providing peak detection at θp = 12° and 65°, respectively. Source data for panels d–g are provided as Source Data files.
Fig. 4. Data acquisition and image reconstruction.
a Schematic illustration of the imaging geometry. Each pixel integrates the incident light intensity from different directions according to its angular response. b Image-formation model. The pixel-array measurement is related to the object by a linear equation y = Ax, where the sensing matrix A contains the angular responses of all pixels. c–f Representative object (c) and corresponding image reconstruction results at SNR = 56 dB (d–f). g–j Example of a more complex object (g) and corresponding image reconstruction results at SNR = 73 dB (h–j). The original cameraman image (g) is used with permission from its copyright owner (Massachusetts Institute of Technology). The images of d, h are based on the simulated responsivity patterns of Fig. 2 with a 6240-pixel array at λ0 = 1550 nm. The images of e and i are based on the experimental responsivity patterns of Fig. 3 with a 5280-pixel array at λ0 = 1550 nm. The images of f and j are based on the simulated patterns under broadband illumination with bandwidth δλ/λ0 = 10% (f) and 5% (j). The image reconstruction algorithm is publicly available (DOI: 10.5281/zenodo.3634939).
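The image-formation model y = Ax is a standard linear inverse problem, so a simple regularized least-squares solver already conveys the idea of the reconstruction step. The sketch below is a minimal stand-in, not the authors' released algorithm (see the Zenodo record cited above): it uses a random sensing matrix in place of the measured angular responses, an assumed amplitude-ratio definition of SNR, and Tikhonov regularization in place of whatever prior the published code applies.

import numpy as np

# Minimal sketch of image reconstruction from compound-eye measurements.
# Forward model: y = A x + noise, where each row of A holds one pixel's
# angular response sampled over the object's directions. Here A is a
# random stand-in; in the paper it is built from the measured (or
# simulated) responsivity patterns of Figs. 2 and 3.

rng = np.random.default_rng(0)

n_dirs = 32 * 32          # object discretized into 32x32 directions (illustrative)
n_pix = 5280              # number of angle-sensitive pixels, as in panels e and i

A = rng.random((n_pix, n_dirs))      # stand-in sensing matrix
x_true = rng.random(n_dirs)          # stand-in object (intensity per direction)

# Add noise at a chosen SNR (amplitude-ratio definition, in dB)
snr_db = 56.0
y_clean = A @ x_true
noise = rng.standard_normal(n_pix)
noise *= np.linalg.norm(y_clean) / (np.linalg.norm(noise) * 10 ** (snr_db / 20))
y = y_clean + noise

# Tikhonov-regularized least squares: x_hat = argmin ||A x - y||^2 + alpha ||x||^2
alpha = 1e-2
x_hat = np.linalg.solve(A.T @ A + alpha * np.eye(n_dirs), A.T @ y)

rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {rel_err:.3f}")

Because the number of pixels exceeds the number of reconstructed directions, the regularized normal equations are well conditioned and a direct solve suffices for this toy example; larger objects or stronger noise would call for the iterative, prior-based methods typical of computational imaging.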
