Transfer of Learning from Vision to Touch: A Hybrid Deep Convolutional Neural Network for Visuo-Tactile 3D Object Recognition

Ghazal Rouhafzay et al. Sensors (Basel). 2020 Dec 27;21(1):113. doi: 10.3390/s21010113.

Abstract

Transfer learning, i.e., leveraging a pre-trained network and fine-tuning it to perform new tasks, has been successfully applied in a variety of machine intelligence fields, including computer vision, natural language processing, and audio/speech recognition. Drawing inspiration from neuroscience research suggesting that visual and tactile stimuli activate similar neural networks in the human brain, in this work we explore the idea of transferring learning from vision to touch in the context of 3D object recognition. In particular, deep convolutional neural networks (CNNs) pre-trained on visual images are adapted and evaluated for the classification of tactile data sets. To do so, we ran experiments with five different pre-trained CNN architectures on five different datasets acquired with different tactile sensing technologies, including BathTip, GelSight, a force-sensing resistor (FSR) array, a high-resolution virtual FSR sensor, and the tactile sensors on the Barrett robotic hand. The results confirm the transferability of learning from vision to touch for interpreting 3D models. Owing to their higher resolution, tactile data from optical tactile sensors achieved higher classification rates based on visual features than data from technologies relying on pressure measurements. A further analysis of the weight updates in the convolutional layers is performed to measure the similarity between visual and tactile features for each tactile sensing technology. Comparing the weight updates across convolutional layers suggests that, by updating only a few convolutional layers, a CNN pre-trained on visual data can be efficiently used to classify tactile data. Accordingly, we propose a hybrid architecture that performs both visual and tactile 3D object recognition with a MobileNetV2 backbone. MobileNetV2 is chosen for its small size and thus its suitability for deployment on mobile devices, so that the network can classify both visual and tactile data. The proposed architecture achieves an accuracy of 100% on visual data and 77.63% on tactile data.
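
The fine-tuning setup described above can be illustrated with a minimal PyTorch sketch: an ImageNet-pretrained MobileNetV2 whose last few convolutional blocks and classifier head are updated on tactile images. This is not the authors' pipeline; the dataset folder, class count, choice of unfrozen blocks, and hyperparameters below are illustrative assumptions.

```python
# Minimal sketch (not the authors' pipeline): fine-tune an ImageNet-pretrained
# MobileNetV2 on tactile images stored as class-labelled folders.
# Requires torchvision >= 0.13 for the weights enum.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 10  # assumption: number of 3D object classes

tactile_tf = transforms.Compose([
    transforms.Resize((224, 224)),                # MobileNetV2 input size
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406],   # ImageNet statistics
                         [0.229, 0.224, 0.225]),
])
# assumption: tactile images converted to RGB and stored under class folders
train_set = datasets.ImageFolder("tactile_train/", transform=tactile_tf)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)

# Freeze the visual feature extractor, then unfreeze only the last few blocks,
# in the spirit of the finding that updating a few convolutional layers of a
# visually pre-trained CNN suffices to classify tactile data.
for p in model.features.parameters():
    p.requires_grad = False
for p in model.features[-3:].parameters():
    p.requires_grad = True

# Replace the ImageNet classifier head with one sized for the tactile classes.
model.classifier[1] = nn.Linear(model.last_channel, NUM_CLASSES)

optimizer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):                            # illustrative epoch count
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```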

Keywords: 3D object recognition; Barrett Hand; convolutional neural networks; force-sensing resistor; machine intelligence; tactile sensors; transfer learning.


Conflict of interest statement

The authors declare no conflict of interest.

Figures

Figure 1. (a) Force-sensing resistor array, (b) example of tactile data, (c) simulated tactile sensor for the virtual environment, and (d) an example of a simulated tactile image.

Figure 2. (a) An example of a 70 by 70 RGB tactile image generated from the BiGS (BioTac Grasp Stability) dataset, and (b) an example of a 7 by 3 instantaneous electrode reading.

Figure 3. Accuracy differences between convolutional neural networks (CNNs) with frozen weights and CNNs with fine-tuned weights.

Figure 4. Average normalized weight differences between CNNs with frozen weights and CNNs with fine-tuned weights.

Figure 5. Architecture of the proposed hybrid visuo-tactile object recognition stage.

Figure 6. Confusion matrices of the visuo-tactile hybrid object recognition architecture for (a) visual data and (b) tactile data.
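
The comparisons summarized in Figures 3 and 4 contrast CNNs with frozen pre-trained weights against their fine-tuned counterparts. The sketch below shows one way such a per-layer, normalized weight-change measure could be computed in PyTorch; the checkpoint name, class count, and the specific normalization (mean absolute change divided by mean original magnitude) are assumptions, not necessarily the paper's exact metric.

```python
# Minimal sketch: per-layer normalized weight change between an ImageNet-pretrained
# MobileNetV2 and a copy fine-tuned on tactile data (cf. Figure 4).
import torch
from torchvision import models

NUM_CLASSES = 10  # assumption: number of 3D object classes

pretrained = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)
finetuned = models.mobilenet_v2(num_classes=NUM_CLASSES)
# hypothetical checkpoint produced by a fine-tuning run such as the sketch above
finetuned.load_state_dict(torch.load("mobilenetv2_tactile.pt"), strict=False)

reference = dict(pretrained.named_parameters())
for name, w_new in finetuned.named_parameters():
    w_old = reference.get(name)
    # restrict the comparison to convolutional kernels of matching shape
    if w_old is None or w_new.dim() != 4 or w_new.shape != w_old.shape:
        continue
    delta = (w_new - w_old).abs().mean() / (w_old.abs().mean() + 1e-12)
    print(f"{name}: normalized weight change = {delta.item():.4f}")
```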

