IEEE Trans Haptics. 2023 Jul-Sep;16(3):424-435. doi: 10.1109/TOH.2023.3303838. Epub 2023 Sep 19.

Representational Similarity Analysis for Tracking Neural Correlates of Haptic Learning on a Multimodal Device

Alix S Macklin et al. IEEE Trans Haptics. 2023 Jul-Sep.

Abstract

A goal of wearable haptic devices has been to enable haptic communication, where individuals learn to map information typically processed visually or aurally to haptic cues via a process of cross-modal associative learning. Neural correlates have been used to evaluate haptic perception and may provide a more objective approach to assess association performance than more commonly used behavioral measures of performance. In this article, we examine Representational Similarity Analysis (RSA) of electroencephalography (EEG) as a framework to evaluate how the neural representation of multifeatured haptic cues changes with association training. We focus on the first phase of cross-modal associative learning, perception of multimodal cues. A participant learned to map phonemes to multimodal haptic cues, and EEG data were acquired before and after training to create neural representational spaces that were compared to theoretical models. Our perceptual model showed better correlations to the neural representational space before training, while the feature-based model showed better correlations with the post-training data. These results suggest that training may lead to a sharpening of the sensory response to haptic cues. Our results show promise that an EEG-RSA approach can capture a shift in the representational space of cues, as a means to track haptic learning.
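The comparison at the heart of the EEG-RSA framework — correlating a theoretical model's dissimilarity structure with the brain-based one — can be sketched compactly. This is an illustration with numpy using randomly generated placeholder RDMs rather than the paper's data; the function name and the choice of Spearman correlation over upper-triangle entries are assumptions for illustration only:

```python
import numpy as np

def rdm_correlation(model_rdm, brain_rdm):
    """Spearman correlation of the upper-triangle (off-diagonal)
    entries of a model RDM and a brain-based RDM."""
    iu = np.triu_indices_from(model_rdm, k=1)
    x, y = model_rdm[iu], brain_rdm[iu]
    # Rank-transform (no ties expected for continuous dissimilarities),
    # then take the Pearson correlation of the ranks.
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

# Placeholder symmetric 9x9 RDMs standing in for the real model and
# brain-based matrices.
rng = np.random.default_rng(0)
model = rng.random((9, 9)); model = (model + model.T) / 2
brain = rng.random((9, 9)); brain = (brain + brain.T) / 2
r = rdm_correlation(model, brain)
```

In a pipeline like the one described here, such a correlation would be computed separately for the pre- and post-training brain-based spaces, at every time bin.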

Figures

Fig. 1.
We examine EEG-RSA as a framework to evaluate how the neural representation of multifeatured haptic cues changes with association training. By developing two hypothetical models, we first evaluate whether the neural representation of haptic cues before training is more correlated with perceptual confusion between the cues. We then evaluate whether the neural representation of haptic cues after training is more reflective of the unique haptic features of the cues themselves.
Fig. 2.
(Left) The MISSIVE, with the actuation components that correspond to the unique MISSIVE features (shown in the middle) highlighted. (Middle) MISSIVE cue features and their corresponding timings. (Right) The four locations on the arm where vibration can occur.
Fig. 3.
The nine haptic cues, delivered via the MISSIVE, considered throughout the use-case experimental analysis. Corresponding features and timing profiles are shown for each cue.
Fig. 4.
Normalized distributions of perceptual responses to the nine haptic cues of interest, based on results from a previous behavioral study considering 32 MISSIVE cues. (Top) The left panel shows the distribution of perceived responses to Cue 1, where subjects could respond with any of the 32 cues; the right panel shows the normalized distribution, where perceived responses were included only if they were one of the nine cues of interest. (Bottom) Normalized distributions of perceived responses to each of the nine haptic cues.
Fig. 5. The Perceptual Model.
Each cell holds the proportion (i.e., the confusion ratio) with which the presentation of each of the nine cues is perceived as each of the other cues of interest.
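A confusion-ratio matrix of this kind can be built by normalizing each row of a raw confusion-count matrix. A minimal sketch, using randomly generated placeholder counts rather than the behavioral data from the prior study:

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder confusion counts: rows = presented cue, columns = perceived
# cue; the diagonal is inflated so cues are mostly identified correctly.
counts = rng.integers(0, 20, size=(9, 9)) + 50 * np.eye(9, dtype=int)

# Confusion ratio: normalize each row so it sums to 1, giving the
# proportion of presentations of cue i perceived as cue j.
perceptual_model = counts / counts.sum(axis=1, keepdims=True)
```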
Fig. 6. The Feature-Based Model.
Each cell holds the number of differing features between each cue pair, giving a value for how dissimilar each haptic cue is from every other one.
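Counting differing features per cue pair amounts to a Hamming-style distance over feature vectors. The feature codes below are placeholders, not the actual MISSIVE feature assignments:

```python
import numpy as np

# Placeholder feature codes for the nine cues (one row per cue); columns
# might encode, e.g., vibration location, duration, and intensity.
cue_features = np.array([
    [0, 0, 0], [0, 0, 1], [0, 1, 0],
    [1, 0, 0], [1, 1, 0], [1, 0, 1],
    [0, 1, 1], [1, 1, 1], [2, 0, 0],
])

n_cues = cue_features.shape[0]
# Each cell counts the features that differ between a pair of cues.
feature_rdm = np.zeros((n_cues, n_cues), dtype=int)
for i in range(n_cues):
    for j in range(n_cues):
        feature_rdm[i, j] = int(np.sum(cue_features[i] != cue_features[j]))
```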
Fig. 7.
(Top) Example trial showing the response epoch from 200 ms before to 800 ms after the presentation of Cue 5, on Channel 1 of a 30-channel EEG recording. (Bottom) All recorded trial responses are averaged to compute the overall channel ERP, separately for each channel. These channel ERPs are then combined to form a 30-channel ERP waveform response, in this case in response to Cue 5.
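Computing a channel ERP is a trial average per channel. A sketch assuming epoched data stored as a trials × channels × samples array (placeholder random data here; the sampling rate is an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder epoched data: trials x channels x samples. At an assumed
# 1 kHz sampling rate, 1000 samples span -200 ms to +800 ms around onset.
n_trials, n_channels, n_samples = 40, 30, 1000
epochs = rng.standard_normal((n_trials, n_channels, n_samples))

# Average across trials, separately for each channel, to obtain the
# 30-channel ERP waveform for this cue.
erp = epochs.mean(axis=0)  # shape: (30, 1000)
```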
Fig. 8.
RSA enables us to reduce high-dimensional neural signals evoked by unique stimuli into a 2D representational space, the brain-based similarity matrix. Brain-based similarity matrices represent how similar the brain response to a particular stimulus is to the response to every other stimulus considered. Populating the representational similarity matrix at each time point allows us to determine how this space changes over time.
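One common way to populate such a matrix at each time point is to take one minus the Pearson correlation between the multichannel response patterns of each cue pair; whether this exact distance matches the study's choice is an assumption of this sketch, and the data are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder per-cue ERPs: cues x channels x time bins.
n_cues, n_channels, n_bins = 9, 30, 50
erps = rng.standard_normal((n_cues, n_channels, n_bins))

def brain_rdm_at(t):
    """Dissimilarity (1 - Pearson r) between the 30-channel response
    patterns of every cue pair at time bin t."""
    patterns = erps[:, :, t]        # cues x channels
    return 1.0 - np.corrcoef(patterns)

# One 9x9 brain-based matrix per time bin.
rdms = np.stack([brain_rdm_at(t) for t in range(n_bins)])
```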
Fig. 9. Bootstrap Permutation Analysis Schematic.
(A) For each haptic cue, the original 30-channel EEG recordings from both conditions are pooled into one group and then relabeled as coming from the pre- or post-condition. This is repeated 1000 times, and for each permuted data set the average ERP waveform response is computed. (B) Brain-to-Model Correlation Results versus Time. This shows the observed correlation between the Feature-Based Model and the brain-based representational space of cues before and after training, as well as a few example permutations of this result, determined by correlating the Feature-Based Model to each permuted brain-based representational space. (C) Permutation Test Statistic (r-difference) versus Time. For each brain-to-model correlation, the difference between the post- and pre-training correlation time series was calculated to determine the test statistic at every time bin, for both the observed and permuted data sets. Because the Feature-Based Model was considered for the comparisons shown in 9B, the r-difference value was calculated as rpost − rpre. (D) Histogram Distribution of Permuted r-difference Values and Significance Testing. This shows the distribution of the permuted test statistics at an example time bin. At this time bin, the observed test statistic falls within the top 2.5% of the distribution, so the result is considered significant with correction.
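The logic of steps (A)-(D) can be sketched as follows. Here the correlation time series and permuted statistics are placeholder draws; in the full pipeline each permutation would relabel trials, recompute ERPs and RDMs, and re-correlate with the model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_bins, n_perm = 50, 1000

# Placeholder brain-to-model correlation time series for the two
# conditions (step B); real values would come from correlating the
# Feature-Based Model with each condition's brain-based RDMs.
r_pre = rng.uniform(-0.2, 0.2, n_bins)
r_post = rng.uniform(0.0, 0.4, n_bins)
observed = r_post - r_pre            # step C: r-difference per time bin

# Step A would relabel pooled trials and recompute ERPs, RDMs, and
# correlations per permutation; here the permuted statistics are
# placeholder draws with the same shape.
permuted = rng.uniform(-0.3, 0.3, size=(n_perm, n_bins))

# Step D: a time bin is flagged when the observed r-difference falls in
# the top 2.5% of the permuted distribution for that bin.
threshold = np.percentile(permuted, 97.5, axis=0)
significant = observed > threshold
```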
Fig. 10.
Perceptual Model correlation results showing correlations between the Perceptual Model and brain-based representational space of cues before and after training. The orange time series shows the pre-training correlation results rpre between the brain-based RDMs of the pre-condition and the Perceptual Model space, and the blue time series shows the post-training correlation results rpost between the brain-based RDMs of the post-condition and the Perceptual Model space. Significant correlation between the pre-training RDMs and Perceptual Model, before correction, is marked by the red significance bar. Significance after correction is marked by an asterisk.
Fig. 11.
Feature-Based Model correlation results showing correlations between the Feature-Based Model and brain-based representational space of cues before and after training. The pre-training correlation results rpre between the brain-based RDMs of the pre-condition and the Feature-Based Model (orange) and the post-training correlation results rpost between the brain-based RDMs of the post-condition and the Feature-Based Model (blue) are presented. Significant correlations between the post-training RDMs and Feature-Based Model, before correction, are marked by blue significance bars. Periods of significance with correction are marked by asterisks.

