A star-nose-like tactile-olfactory bionic sensing array for robust object recognition in non-visual environments

Mengwei Liu et al. Nat Commun. 2022 Jan 10;13(1):79. doi: 10.1038/s41467-021-27672-z.

Abstract

Object recognition is among the basic survival skills of human beings and other animals. To date, artificial intelligence (AI)-assisted high-performance object recognition has been primarily vision-based, empowered by the rapid development of sensing and computational capabilities. Here, we report a tactile-olfactory sensing array, inspired by the natural sense-fusion system of the star-nosed mole, that permits real-time acquisition of the local topography, stiffness, and odor of a variety of objects without visual input. The tactile-olfactory information is processed by a bioinspired olfactory-tactile associated machine-learning algorithm that essentially mimics the biological fusion procedures in the neural system of the star-nosed mole. Aiming at human identification during rescue missions in challenging environments such as dark or buried scenarios, our tactile-olfactory intelligent sensing system classified 11 typical objects with an accuracy of 96.9% in a simulated rescue scenario at a fire department test site. The tactile-olfactory bionic sensing system required no visual input and showed superior tolerance to environmental interference, highlighting its great potential for robust object recognition in difficult environments where other methods fall short.

Conflict of interest statement

The authors declare no competing interests.

Figures

Fig. 1
Fig. 1. Bioinspired tactile-olfactory associated intelligent sensory system.
a Schematic illustration of the bio-sensory perceptual system of the star-nosed mole (left) and the biomimetic intelligent sensory system (right). Top left: diagram of the unique structure of the star-shaped nose. Bottom left: scheme showing the processing hierarchy of tactile and olfactory information in the neural system of the star-nosed mole. Blue and red areas represent the processing regions (PA, primary area; AA, association area) for tactile and olfactory information, respectively. Blue and red arrows represent the direction of tactile and olfactory information flow; the purple arrow shows the information flow of the multisensory fusion. Top right: schematic diagram of the sensing array on the mechanical hand, including force and olfactory sensors. Bottom right: illustration of the artificial neural networks. b Image of the mechanical hand, scale bar: 2 cm. c The eleven objects to be identified, scale bar: 3 cm. The objects can be divided into five categories: Human (H, main target), Olfactory interference (O, e.g., worn clothes), Tactile interference (T, e.g., mouse), Soft objects (S, e.g., fruits), and Rigid objects (R, e.g., debris). d The machine-learning framework consists of three connected layers of neural networks that mimic the hierarchy of the multisensory fusion process. Top left: early tactile information processing. Bottom left: early olfactory information processing. Right: neural network resembling the high-level fusion of tactile and olfactory interactions.
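As a rough illustration of the three-part framework described in Fig. 1d (an early tactile branch, an early olfactory branch, and a high-level fusion network), the following PyTorch sketch wires the 70 force pixels and 6 gas channels through separate encoders before a shared classification head. Layer widths, names, and the concatenation step are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class TactileOlfactoryFusionNet(nn.Module):
    """Minimal sketch of a two-branch, late-fusion classifier (assumed sizes)."""

    def __init__(self, n_tactile=70, n_olfactory=6, n_classes=11):
        super().__init__()
        # Early tactile processing (70 force pixels: 14 per fingertip, 5 fingertips)
        self.tactile_branch = nn.Sequential(
            nn.Linear(n_tactile, 128), nn.ReLU(), nn.Linear(128, 64), nn.ReLU()
        )
        # Early olfactory processing (6 gas-sensor channels)
        self.olfactory_branch = nn.Sequential(
            nn.Linear(n_olfactory, 32), nn.ReLU(), nn.Linear(32, 64), nn.ReLU()
        )
        # High-level fusion of the two modalities into 11 object classes
        self.fusion_head = nn.Sequential(
            nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, n_classes)
        )

    def forward(self, tactile, olfactory):
        t = self.tactile_branch(tactile)
        o = self.olfactory_branch(olfactory)
        return self.fusion_head(torch.cat([t, o], dim=-1))
```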
Fig. 2
Fig. 2. Characterization of tactile and olfactory sensing array.
a The design of the array architecture (i) shows the locations of the 14 force sensors on each fingertip (ii), along with the locations of the six gas sensors on the palm (iii). The Si-based force (iv) and gas (v) sensors are fabricated using microelectromechanical systems techniques and integrated on flexible printed circuits. Blue area: force-sensitive area (single-crystalline silicon beam), scale bar: 200 μm. Red area: area modified by gas-sensitive material, scale bar: 400 μm. b The sensitivity of the force and gas sensors. n = 12 for each group. The error bars denote standard deviations of the mean. Top: the output voltage response of the force sensor under gradient pressure loading. Bottom: the normalized resistance response of the gas sensor under continuously increasing ethanol gas concentration. c In a typical touching process, the fingers move closer to the object until the point of contact (i. reach phase) and experience a sudden rise in tactile force as the object is touched (ii. load phase). The hand then holds the object for a certain time (iii. hold phase) and finally releases it (iv. release phase). d Response curve of the force sensor during contact with three different objects. When the same force is applied by the mechanical hand, the degree of deformation of the objects varies with their elastic moduli, changing the local contact area and the resulting reactive pressure. e The normalized resistance response curve of the gas sensor during contact with and separation from the detected gas flow. f Photograph of the mechanical hand touching a human arm, scale bar: 2 cm. g The tactile mappings at the three feature points marked in (c). Each contains 70 pixels, 14 per fingertip. h The hexagonal olfactory mappings of three different objects: an arm, worn clothes, and an orange.
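For the characterization quantities plotted in Fig. 2b and e, a small sketch of the usual post-processing is given below, assuming the conventional definitions: the gas response as resistance change normalized to baseline, and sensitivity as the slope of sensor output versus applied stimulus. The variable names, example numbers, and the least-squares fit are illustrative assumptions, not the paper's measured data.

```python
import numpy as np

def normalized_resistance(resistance, baseline):
    """(R - R0) / R0 for a gas-sensor trace."""
    return (resistance - baseline) / baseline

def sensitivity(stimulus, output):
    """Slope of a linear fit of sensor output vs. applied stimulus."""
    slope, _intercept = np.polyfit(stimulus, output, 1)
    return slope

# Example: force-sensor output voltage under gradient pressure loading
pressure_kpa = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
voltage_mv = np.array([0.0, 2.1, 4.0, 6.2, 8.1])
print(sensitivity(pressure_kpa, voltage_mv))  # ~0.41 mV/kPa on this illustrative data
```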
Fig. 3
Fig. 3. BOT associated learning for object classification.
a Scheme showing how tactile and olfactory information is processed and fused in the BOT associated learning architecture. 512D, 512-dimensional vector; 100D, 100-dimensional vector. b Confusion matrix of the tactile-only recognition strategy. c Confusion matrix of the BOT-M recognition strategy. Abbreviations: Org, Orange; Twl, Towel; Arm, Arm; Stn, Stone; Can, Can; Hir, Hair; Leg, Leg; Ms, Mouse; Clth, Worn Clothes; Mug, Mug; Ctn, Carton. d BOT-M associated learning shows the best accuracy among the unimodal (tactile and olfactory) and multimodal fusion strategies (BOT, BOT-R, BOT-F, BOT-M). n = 10 for each group. The error bars denote standard deviations of the mean. Unimodal strategies: olfactory-based recognition using only olfactory data and tactile-based recognition using only tactile data. Multimodal fusion strategies using both tactile and olfactory data: BOT associated learning fusion, fusion based on random points (BOT-R), fusion based on feature-point selection (BOT-F), and fusion based on feature-point selection and multiplication (BOT-M). The final recognition accuracies are 66.7, 81.9, 91.2, 93.8, 94.7, and 96.9% for the olfactory, tactile, BOT, BOT-R, BOT-F, and BOT-M strategies, respectively. e The change in recognition accuracy of the BOT-M neural network with increasing training cycles. Inset: variation of the loss function during training. f Testing results of the tactile-, olfactory-, and BOT-M-based strategies under defective tactile and olfactory information with various Gaussian noise levels (0.05, 0.1, 0.15, and 0.2) show that BOT-M maintains higher recognition accuracy at increased noise levels than the unimodal strategies.
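Two ingredients of Fig. 3 lend themselves to a short sketch: fusing a 512-dimensional tactile feature with a 100-dimensional olfactory feature, either by concatenation or by element-wise multiplication (the latter loosely in the spirit of BOT-M), and corrupting inputs with Gaussian noise of a given standard deviation for the robustness test in panel f. The projection step, dimensions, and random data are assumptions, not the paper's exact method.

```python
import numpy as np

rng = np.random.default_rng(0)

tactile_feat = rng.normal(size=512)    # 512D tactile feature vector
olfactory_feat = rng.normal(size=100)  # 100D olfactory feature vector

# Project both modalities to a common width before fusing (assumed step).
W_t = rng.normal(size=(128, 512))
W_o = rng.normal(size=(128, 100))
t = W_t @ tactile_feat
o = W_o @ olfactory_feat

fused_concat = np.concatenate([t, o])  # concatenation-style fusion
fused_mult = t * o                     # multiplicative fusion (BOT-M-like)

def add_gaussian_noise(x, sigma):
    """Corrupt a (normalized) signal with zero-mean Gaussian noise."""
    return x + rng.normal(scale=sigma, size=x.shape)

noisy_tactile = add_gaussian_noise(tactile_feat, sigma=0.1)  # noise levels 0.05-0.2 in panel f
```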
Fig. 4
Fig. 4. Human recognition in a hazardous environment based on BOT.
a Schematic illustration of the testing system and scenarios. (i) Scheme showing the system, which consists of a computer, a wireless data transmission module, a data pre-processing circuit board, and a mechanical hand. (ii) Four hazardous application scenarios: gas interference, buried objects, partially damaged tactile sensors, and a simulated rescue mission. b IR photographs of the mechanical hand holding different objects (i. an arm and ii. other objects) under various gas interference, scale bar: 4 cm. c Recognition accuracy for different parts of the human body under interference from various concentrations (50, 100, 150, and 200 ppm) of acetone and ammonia, using olfactory-based recognition and BOT-M associated learning. d Photograph of arm recognition underneath debris, scale bar: 1.5 cm. e The change in arm-recognition accuracy with increasing burial level, using tactile-based recognition, BOT, and BOT-M associated learning. Inset: photos of one finger (top) and four fingers (bottom) of the mechanical hand being blocked from touching the arm, scale bar: 5 cm. f Scheme showing damage to random parts of both force and gas sensors in the tactile array. g Accuracy of arm recognition at different sensor damage rates using tactile-based recognition and BOT-M associated learning. n = 10 for each group. The error bars denote standard deviations of the mean. h Photograph of a volunteer’s leg buried underneath debris and a robotic arm performing the rescue mission, scale bar: 8 cm. i Flow diagram showing the decision-making strategy for human recognition and rescue. Inset: photograph of the robotic arm removing debris, scale bar: 10 cm. j The variation of the leg/debris classification vector and the decreasing burial degree as the covering debris is gradually removed by the robotic arm.
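The decision-making strategy summarized in Fig. 4i can be read as a simple sense-classify-act loop: touch and sniff the scene, classify what the hand is contacting, and, if a buried human body part is detected, have the robotic arm remove a layer of debris and re-check. The sketch below is a hedged rendering of that loop; the helper callables (acquire_readings, classify, remove_top_debris), the class set, and the confidence threshold are hypothetical placeholders, not the authors' control code.

```python
HUMAN_CLASSES = {"Arm", "Leg", "Hair"}
MAX_ATTEMPTS = 10

def rescue_decision_loop(acquire_readings, classify, remove_top_debris):
    """Classify, dig, and re-classify until a human part is confidently found."""
    for _ in range(MAX_ATTEMPTS):
        tactile, olfactory = acquire_readings()           # touch + sniff the scene
        label, confidence = classify(tactile, olfactory)  # BOT-M-style classifier
        if label in HUMAN_CLASSES and confidence > 0.9:
            return label                                  # human found: stop digging
        remove_top_debris()                               # clear one layer and retry
    return None                                           # no human detected
```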
