Noise reduction in brainwaves by using both EEG signals and frontal viewing camera images

Jae Won Bang et al. Sensors (Basel). 2013 May 13;13(5):6272-94. doi: 10.3390/s130506272.

Abstract

Electroencephalogram (EEG)-based brain-computer interfaces (BCIs) have been used in various applications, including human-computer interfaces, diagnosis of brain diseases, and measurement of cognitive status. However, EEG signals can be contaminated with noise caused by a user's head movements. We therefore propose a new method that combines an EEG acquisition device with a frontal viewing camera to isolate and exclude the sections of EEG data that contain this noise. The method is novel in three ways. First, we compare the accuracy of detecting head movements from the features of EEG signals in the frequency and time domains with the accuracy obtained from the motion features of images captured by the frontal viewing camera. Second, optimal features are selected from the frequency-domain EEG features and the camera motion features, with dimension reduction and feature selection performed using linear discriminant analysis (LDA). Third, the combined features are used as inputs to a support vector machine (SVM), which improves the accuracy of head-movement detection. Experimental results show that the proposed method detects head movements with an average error rate of approximately 3.22%, lower than that of the other methods tested.
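The pipeline the abstract describes can be sketched as follows: concatenate the frequency-domain EEG features with the camera motion features, project them with LDA, and classify head movement with an SVM. This is a minimal illustration on synthetic data, not the paper's implementation; the feature dimensions, class shifts, and SVM settings are assumptions chosen only to make the example self-contained.

```python
# Sketch of the combined-feature head-movement detector described in the
# abstract: EEG frequency features + camera motion features -> LDA -> SVM.
# All data below are synthetic; dimensions and settings are assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n = 200
eeg_freq = rng.normal(size=(n, 16))   # e.g., band powers from 16 electrodes
motion = rng.normal(size=(n, 4))      # e.g., frontal-camera motion statistics
y = rng.integers(0, 2, size=n)        # 1 = head movement, 0 = no movement
motion[y == 1] += 2.0                 # movement shifts the motion features
eeg_freq[y == 1] += 1.0               # and perturbs the EEG spectrum

X = np.hstack([eeg_freq, motion])     # combined feature vector
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# For two classes, LDA projects onto a single discriminant dimension.
lda = LinearDiscriminantAnalysis()
X_tr_lda = lda.fit_transform(X_tr, y_tr)
X_te_lda = lda.transform(X_te)

# The reduced features feed an SVM that flags head-movement segments,
# which could then be excluded from the EEG stream.
clf = SVC(kernel="rbf").fit(X_tr_lda, y_tr)
error_rate = 1.0 - clf.score(X_te_lda, y_te)
print(f"head-movement detection error rate: {error_rate:.2%}")
```

On this easily separable synthetic data the error rate is low; the paper's reported 3.22% refers to its real EEG-plus-camera experiments, not to this sketch.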


Figures

Figure 1.
Flowchart of the proposed system.
Figure 2.
Proposed device and speller UI system. (a) Proposed device; (b) Example of experimental environment; (c) Speller UI system.
Figure 3.
The positions of 16 electrodes of the Emotiv EPOC headset.
Figure 4.
Example of using LDA to obtain the optimal number of feature dimensions from the training data, using both frontal image features and EEG features in the frequency domain.
Figure 5.
ROC curves for all methods.
Figure 6.
Example of a Type 1 error and the correct detection of no head movement (person 6 in Table 6). (a) EEG signals. Panels (b), (d), (f), and (h) respectively show the FT of the EEG signals, the pixel difference, the edge pixel difference, and the LKT motion vectors in the case of a Type 1 error; panels (c), (e), (g), and (i) show the same quantities in the case of correct detection of no head movement.
Figure 7.
Example of a Type 1 error and the correct detection of no head movement (person 2 in Table 6). (a) EEG signals. Panels (b), (d), (f), and (h) respectively show the FT of the EEG signals, the pixel difference, the edge pixel difference, and the LKT motion vectors in the case of a Type 1 error; panels (c), (e), (g), and (i) show the same quantities in the case of correct detection of no head movement.
Figure 8.
Example of a Type 2 error and the correct detection of head movement (person 9 in Table 6). (a) EEG signals. Panels (b), (d), (f), and (h) respectively show the FT of the EEG signals, the pixel difference, the edge pixel difference, and the LKT motion vectors in the case of a Type 2 error; panels (c), (e), (g), and (i) show the same quantities in the case of correct detection of head movement.
Figure 9.
Example of a Type 2 error and the correct detection of head movement (person 1 in Table 6). (a) EEG signals. Panels (b), (d), (f), and (h) respectively show the FT of the EEG signals, the pixel difference, the edge pixel difference, and the LKT motion vectors in the case of a Type 2 error; panels (c), (e), (g), and (i) show the same quantities in the case of correct detection of head movement.
Figure 10.
Comparison of accuracies with and without the proposed method in the speller UI system.
