Front Neurosci. 2020 Oct 16;14:509364. doi: 10.3389/fnins.2020.509364. eCollection 2020.

Decoding Kinematic Information From Primary Motor Cortex Ensemble Activities Using a Deep Canonical Correlation Analysis

Min-Ki Kim et al.

Abstract

The control of arm movements through intracortical brain-machine interfaces (BMIs) mainly relies on the activities of primary motor cortex (M1) neurons and on mathematical models that decode those activities. Recent research on the decoding process attempts not only to improve performance but also to understand neural and behavioral relationships. In this study, we propose an efficient decoding algorithm using a deep canonical correlation analysis (DCCA), which maximizes correlations between canonical variables by non-linearly approximating the mappings from neuronal to canonical variables via deep learning. We investigate the effectiveness of using DCCA for finding a relationship between M1 activities and kinematic information when non-human primates performed a reaching task with one arm. We then examine whether neural activity representations from DCCA improve decoding performance with linear and non-linear decoders: a linear Kalman filter (LKF) and a long short-term memory recurrent neural network (LSTM-RNN). We found that neural representations of M1 activities estimated by DCCA yielded more accurate decoding of velocity than those estimated by linear canonical correlation analysis, principal component analysis, factor analysis, and a linear dynamical system. Decoding with DCCA also outperformed decoding the original firing rates (FRs) using LSTM-RNN (6.6% and 16.0% average improvement for velocity and position, respectively; Wilcoxon rank sum test, p < 0.05). Thus, DCCA can identify the kinematics-related canonical variables of M1 activities, thereby improving decoding performance. Our results may help advance the design of decoding models for intracortical BMIs.
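The linear Kalman filter mentioned in the abstract follows the standard predict/correct recursion, treating kinematic states as latent variables and neural representations as observations. A minimal numpy sketch of this recursion (the model matrices below are hypothetical placeholders, not the paper's fitted parameters) is:

```python
import numpy as np

def kalman_decode(Z, A, W, H, Q, x0, P0):
    """Decode kinematic states from neural representations Z (T x obs_dim).
    A/W: state transition matrix and its noise covariance;
    H/Q: observation matrix and its noise covariance;
    x0/P0: initial state estimate and covariance."""
    x, P = x0, P0
    states = []
    for z in Z:
        # Predict: propagate the kinematic state through the movement model.
        x = A @ x
        P = A @ P @ A.T + W
        # Correct: update the prediction with the observed neural representation.
        S = H @ P @ H.T + Q
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
        states.append(x.copy())
    return np.array(states)
```

In the paper's setup the observations Z would be one of the neural representations (e.g., DCCA canonical variables) and the state would hold hand velocity components.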

Keywords: Kalman filter; decoding algorithm; deep canonical correlation analysis; intracortical brain–machine interface; long short-term memory recurrent neural network; primary motor cortex (M1).


Figures

FIGURE 1
Simulation overview for assessing the effects of DCCA on two decoders. (A) Behavioral tasks for each dataset. The left panel shows the center-out reaching task performed by monkey C, and the right panel shows the sequential reaching task performed by monkey M. (B) Schematic diagram of the DCCA between firing rates and kinematic variables. The left inputs (L-input) of the networks are the naïve firing rates, and the right inputs (R-input) are the kinematic variables: x- and y-velocity, and speed. The dotted-line box between the networks denotes a canonical correlation analysis (CCA) between the left-canonical variables (ZDCV) and the right-canonical variables (XDCV). (C) Block diagram of the simulation paradigm for the comparative decoding study. (D) Prediction errors over the state dimensionalities (q) of each dataset. The filled circle marks the dimensionality yielding the minimum prediction error for each dimensionality reduction method (Yu et al., 2009). Each color denotes a dimensionality reduction method.
FIGURE 2
Correlations between canonical variables. (A) Correlations between canonical variables extracted by LCCA (ZLCV and XLCV). (B) Correlations between canonical variables extracted by DCCA (ZDCV and XDCV). The upward-pointing triangles denote the samples per time step of the canonical variables. ρ denotes Pearson’s correlation coefficient, and p indicates whether a significant linear regression relationship exists between X and Z. Each row corresponds to one dimension of the canonical variables. The orange triangles denote the dataset CRT and the blue triangles denote the dataset SRT.
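The canonical correlations plotted in panels like these can be computed, for the linear case (LCCA), by whitening each view and taking the singular values of the whitened cross-covariance. A self-contained numpy sketch of this standard formulation (not the authors' implementation) is:

```python
import numpy as np

def linear_cca(Z, X, reg=1e-8):
    """Canonical correlations between neural data Z (T x p) and
    kinematics X (T x q), via whitening plus SVD of the cross-covariance."""
    Zc = Z - Z.mean(axis=0)
    Xc = X - X.mean(axis=0)
    T = len(Z)
    Czz = Zc.T @ Zc / (T - 1) + reg * np.eye(Z.shape[1])
    Cxx = Xc.T @ Xc / (T - 1) + reg * np.eye(X.shape[1])
    Czx = Zc.T @ Xc / (T - 1)

    def inv_sqrt(C):
        # Inverse matrix square root of a symmetric positive-definite matrix.
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    M = inv_sqrt(Czz) @ Czx @ inv_sqrt(Cxx)
    # Singular values of M are the canonical correlations, sorted descending.
    return np.linalg.svd(M, compute_uv=False)
```

DCCA replaces the fixed linear projections implicit in this computation with deep networks applied to each view before the CCA step, which is what allows the non-linear neural-to-kinematic mapping described in the abstract.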
FIGURE 3
Estimation of neural representations by linear velocity tuning models (testing data). Single traces of the actual neural representations over time in each trial of the test data (gray lines) are overlaid with the corresponding estimates from the linear velocity tuning model (red lines). We present the representative traces of neural representations that were most accurately estimated by the linear velocity tuning models, i.e., those yielding the highest r2, where r2 denotes the goodness-of-fit of the linear velocity tuning model. The top row shows the estimation of ZE–FR in each dataset (CRT and SRT). The second to fourth rows show the estimations of ZPCA, ZFA, and ZLDS in each dataset. The bottom two rows show the estimations of ZLCV and ZDCV. Columns (A) and (B) correspond to datasets CRT and SRT, respectively.
FIGURE 4
Velocity tuning properties of neuronal canonical variables estimated by the neural representations. (A,B) The points denote the linear velocity tuning quality (r2) for all dimensions of the input variables (ZE–FR, ZPCA, ZFA, ZLDS, ZLCV, and ZDCV). The red horizontal line denotes the r2 averaged over all dimensions. The black left-pointing marker denotes the 95% confidence level of each neural representation’s r2. (C,D) Each panel depicts the topographical map of the input variable with respect to the kinematic variables, such as velocity (v). Each panel corresponds to the best-tuned dimension, i.e., the one with the highest r2.
FIGURE 5
The relationship between training error and average r2 of velocity tuning for each dimensionality of the neural representations. Each colored circle corresponds to the mean r2 and training error for a neural representation (ZE–FR, ZPCA, ZFA, ZLDS, ZLCV, and ZDCV). The (A) top and (B) bottom panels correspond to the datasets CRT and SRT, respectively.
FIGURE 6
Decoded velocity trajectory from each pair of the variables (testing data). Each column denotes the decoded (X- and Y-axis) velocity trajectories according to the predictors: ZE–FR, ZPCA, ZFA, ZLDS, ZLCV, and ZDCV. The solid gray lines denote the actual velocity, and the solid red and blue lines depict the outputs of the linear model and LSTM-RNN, respectively. For the linear model, the LKF was used for ZE–FR, ZPCA, ZFA, ZLCV, and ZDCV, whereas the NDF (linear filter) was used for ZLDS. The vertical gray lines denote boundaries between trial intervals for the reaching. The top (A) and bottom (B) panels correspond to the datasets CRT and SRT, respectively.
FIGURE 7
Reconstructed position trajectory in the dataset CRT (testing data). Each panel denotes the reconstructed position trajectories according to the predictors: ZE–FR, ZPCA, ZFA, ZLDS, ZLCV, and ZDCV. Solid gray lines denote the true position trajectories, red lines denote the position trajectories reconstructed from the output of the linear model, and blue lines denote those from the output of LSTM-RNN. For the linear model, the LKF was used for ZE–FR, ZPCA, ZFA, ZLCV, and ZDCV, whereas the NDF (linear filter) was used for ZLDS. The filled yellow circle denotes the home position (0, 0) from which the non-human primates started to move their hands. Solid lines denote the position trajectories averaged across trials, and shaded areas denote the standard errors across 44 trials for each direction.
FIGURE 8
Comparison of the decoding error for the velocity between the neural representations for each decoder. The mean error of decoding the hand velocity (A) and reconstructing the hand position (B) from decoded velocity, for the six different neural representations (ZE–FR, ZPCA, ZFA, ZLDS, ZLCV, and ZDCV; see the text for descriptions), using each decoder [linear model (orange) and LSTM-RNN (purple)]. For the linear model, the LKF was used for ZE–FR, ZPCA, ZFA, ZLCV, and ZDCV, whereas the NDF (linear filter) was used for ZLDS. The vertical lines indicate the standard error, and the asterisks denote significant differences [∗p < 0.05, ∗∗p < 0.01, Friedman test with multiple comparisons (Bonferroni correction)]. The left and right columns correspond to the datasets CRT and SRT, respectively.
FIGURE 9
Comparison of the decoding error for the velocity and reconstructed position between neural representations for all decoders. The mean error (open squares) of decoding the hand (A) velocity and (B) position from the six different neural representations (ZE–FR, ZPCA, ZFA, ZLDS, ZLCV, and ZDCV; see the text for descriptions), using each decoder [linear model (red) and LSTM-RNN (blue)]. For the linear model, the LKF was used for ZE–FR, ZPCA, ZFA, ZLCV, and ZDCV, whereas the NDF (linear filter) was used for ZLDS. The vertical lines indicate the standard error, and the asterisks denote significant differences [∗p < 0.05, ∗∗p < 0.01, two-way Friedman test with multiple comparisons (Bonferroni correction)]. The left and right columns correspond to the datasets CRT and SRT, respectively.
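The statistical comparison described in Figures 8 and 9 (a Friedman test across representations, followed by Bonferroni-corrected pairwise comparisons) can be sketched with scipy; the error matrix below is synthetic illustration data, not the paper's results.

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

# Hypothetical decoding errors: rows = cross-validation folds,
# columns = neural representations (e.g., E-FR, PCA, FA, LDS, LCV, DCV).
rng = np.random.default_rng(0)
errors = rng.normal(loc=[1.0, 0.95, 0.9, 0.9, 0.85, 0.7],
                    scale=0.05, size=(20, 6))

# Friedman test: do the six representations differ overall?
stat, p = friedmanchisquare(*errors.T)

# Bonferroni-corrected pairwise follow-up: each representation vs. the
# last column (the DCCA-based one, in this hypothetical layout).
n_comparisons = errors.shape[1] - 1
p_pairs = [wilcoxon(errors[:, j], errors[:, -1]).pvalue
           for j in range(n_comparisons)]
significant = [pp < 0.05 / n_comparisons for pp in p_pairs]
```

The Bonferroni correction divides the significance threshold by the number of pairwise comparisons, keeping the family-wise error rate at the nominal level.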

References

    1. Aflalo T., Kellis S., Klaes C., Lee B., Shi Y., Pejsa K., et al. (2015). Decoding motor imagery from the posterior parietal cortex of a tetraplegic human. Science 348, 906–910. doi: 10.1126/science.aaa5417
    2. Aggarwal V., Acharya S., Tenore F., Shin H. C., Etienne-Cummings R., Schieber M. H., et al. (2008). Asynchronous decoding of dexterous finger movements using M1 neurons. IEEE Trans. Neural Syst. Rehabil. Eng. 16, 3–14. doi: 10.1109/TNSRE.2007.916289
    3. Ahmadi N., Constandinou T. G., Bouganis C.-S. (2019). “Decoding hand kinematics from local field potentials using long short-term memory (LSTM) network,” in 2019 9th International IEEE/EMBS Conference on Neural Engineering (NER) (San Francisco, CA), 415–419. doi: 10.1109/NER.2019.8717045
    4. Ames K. C., Ryu S. I., Shenoy K. V. (2014). Neural dynamics of reaching following incorrect or absent motor preparation. Neuron 81, 438–451. doi: 10.1016/j.neuron.2013.11.003
    5. Anderson T. W. (1984). An Introduction to Multivariate Statistical Analysis, 2nd Edn. New Jersey: John Wiley and Sons.
