A Sparse Representation Classification Scheme for the Recognition of Affective and Cognitive Brain Processes in Neuromarketing

Vangelis P Oikonomou et al. Sensors (Basel). 2023 Feb 23;23(5):2480. doi: 10.3390/s23052480.

Abstract

In this work, we propose a novel framework for recognizing the cognitive and affective processes of the brain during neuromarketing-based stimuli using EEG signals. The most crucial component of our approach is the proposed classification algorithm, which is based on a sparse representation classification scheme. The basic assumption of our approach is that EEG features from a cognitive or affective process lie in a linear subspace. Hence, a test brain signal can be represented as a linear (or weighted) combination of brain signals from all classes in the training set. The class membership of the brain signals is determined by adopting the Sparse Bayesian Framework with graph-based priors over the weights of the linear combination. Furthermore, the classification rule is constructed using the residuals of the linear combination. Experiments on a publicly available neuromarketing EEG dataset demonstrate the usefulness of our approach. For the two classification tasks offered by the employed dataset, namely affective state recognition and cognitive state recognition, the proposed classification scheme achieves higher classification accuracy than the baseline and state-of-the-art methods (an improvement of more than 8% in classification accuracy).
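
To make the residual-based classification rule concrete, the following is a minimal sketch of sparse representation classification on synthetic data. It is an illustration only: it uses a generic Lasso solver in place of the paper's Sparse Bayesian framework with graph-based priors, and random vectors in place of EEG features; all names, sizes, and parameter values are assumptions rather than the authors' settings.

# Minimal sketch of residual-based sparse representation classification (SRC).
# Illustration only: a generic Lasso solver stands in for the paper's Sparse
# Bayesian framework with graph-based priors, and synthetic vectors stand in
# for EEG features. Names, sizes, and parameters are hypothetical.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Dictionary A: each column is a training feature vector; columns are grouped by class.
n_features, n_per_class, classes = 32, 20, [0, 1]
A = np.hstack([rng.normal(loc=c, scale=1.0, size=(n_features, n_per_class))
               for c in classes])
labels = np.repeat(classes, n_per_class)

# Test sample (here drawn from class 1 for demonstration).
y = rng.normal(loc=1.0, scale=1.0, size=n_features)

# Step 1: represent the test sample as a sparse linear combination of all training samples.
x = Lasso(alpha=0.1, fit_intercept=False, max_iter=10000).fit(A, y).coef_

# Step 2: assign the class whose coefficients reconstruct y with the smallest residual.
residuals = []
for c in classes:
    x_c = np.where(labels == c, x, 0.0)        # keep only the class-c coefficients
    residuals.append(np.linalg.norm(y - A @ x_c))

print("residuals per class:", residuals)
print("predicted class:", classes[int(np.argmin(residuals))])

In the paper's setting, the columns of A would hold EEG feature vectors from the training trials of each class, and the sparse weights would instead be inferred under the graph-based Bayesian prior rather than by Lasso.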

Keywords: brain computer interfaces; electroencephalography; neuromarketing; sparse representation classification.


Conflict of interest statement

The authors declare no conflict of interest.

Figures

Figure 1. Averaged classification accuracy (with standard error) between the least and most preferred products.
Figure 2. Overall accuracy and confusion matrices for each method with respect to product preferences. Each matrix provides the overall performance of each classifier with respect to each class (in our case, product preferences). Furthermore, class-wise precision (last two separated columns on the right) and class-wise recall (last two separated rows on the bottom) are provided.
Figure 3. Overall accuracy and confusion matrices for each method with respect to which product the participant views. Each matrix provides the performance of each classifier with respect to each class (in our case, the viewed product). Furthermore, class-wise precision (last two separated columns on the right) and class-wise recall (last two separated rows on the bottom) are provided.
Figure 4. Averaged classification accuracy as the number of training samples is varied from 20 to 160.

