Closed-Loop Hybrid Gaze Brain-Machine Interface Based Robotic Arm Control with Augmented Reality Feedback

Hong Zeng et al. Front Neurorobot. 2017 Oct 31;11:60. doi: 10.3389/fnbot.2017.00060. eCollection 2017.
Abstract

Brain-machine interfaces (BMIs) can be used to control a robotic arm to assist paralyzed people in performing activities of daily living. However, controlling the grasping and lifting of objects with a robotic arm remains a complex task for BMI users, and it is hard to achieve high efficiency and accuracy even after extensive training. One important reason is the lack of sufficient feedback for the user to perform closed-loop control. In this study, we propose an augmented reality (AR) guidance method that provides enhanced visual feedback to the user for closed-loop control with a hybrid Gaze-BMI, which combines an electroencephalography (EEG)-based BMI with eye tracking for intuitive and effective control of the robotic arm. Object manipulation tasks requiring the robotic arm to avoid an obstacle in the workspace were designed to evaluate the performance of our method. The experimental results from eight subjects verify the advantages of the proposed closed-loop system (with AR feedback) over the open-loop system (with visual inspection only). The number of trigger commands used to control the robotic arm to grasp and lift the objects was significantly reduced with AR feedback, and the height gaps of the gripper in the lifting process decreased by more than 50% compared with trials using normal visual inspection only. The results reveal that hybrid Gaze-BMI users can benefit from the information provided by the AR interface, improving efficiency and reducing cognitive load during the grasping and lifting processes.

Keywords: augmented reality feedback; brain-machine interface (BMI); closed-loop control; eye tracking; human-robot interaction; hybrid Gaze-BMI.


Figures

Figure 1
The block diagram of the proposed hybrid Gaze-BMI based robotic arm control system with AR feedback. Image processing is applied to segment all the potential cuboids from the image of the workspace. The segmented objects can be selected by the subjects via eye tracking and confirmed using the trigger commands from the BMI. The trigger commands from the hybrid Gaze-BMI are used to (1) confirm the object selection by the user, (2) trigger the switching of the action sequence, or (3) continuously control the aperture and height of the gripper during the grasping and lifting processes, respectively. AR feedback is provided to the BMI user during the grasping and lifting processes via the monitor. The robotic arm implements the reaching, grasping, lifting, delivering, and releasing tasks in response to the trigger commands obtained from the hybrid Gaze-BMI.
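For illustration, a minimal C++ sketch of how such trigger commands could drive the control flow in the block diagram is given below. The phase names, step sizes, and thresholds are hypothetical stand-ins; the actual system couples a Qt/C++ application to the eye tracker, the EEG headset, and the Dobot arm.

```cpp
// Minimal control-flow sketch (hypothetical names and thresholds); the real
// system is a Qt/C++ application connected to the eye tracker, the EEG-based
// BMI, and the Dobot arm, which are replaced here by stub inputs.
#include <iostream>

enum class Phase { Selecting, Reaching, Grasping, Lifting, Delivering, Releasing, Done };

struct HybridInput {
    bool gazeOnObject;   // eye tracker: gaze dwelling on a segmented object
    bool motorImagery;   // BMI: motor imagery state detected in the EEG
};

struct GripperState { double apertureMm = 35.0; double heightMm = 0.0; };

// One control tick: a trigger command requires gaze and motor imagery together.
Phase tick(Phase phase, const HybridInput& in, GripperState& g) {
    if (!(in.gazeOnObject && in.motorImagery)) return phase;   // no trigger command
    switch (phase) {
        case Phase::Selecting:  return Phase::Reaching;        // (1) confirm selection
        case Phase::Reaching:   return Phase::Grasping;        // (2) switch action sequence
        case Phase::Grasping:   g.apertureMm -= 2.0;           // (3) close gripper stepwise
                                return g.apertureMm <= 20.0 ? Phase::Lifting : phase;
        case Phase::Lifting:    g.heightMm += 5.0;             // (3) raise gripper stepwise
                                return g.heightMm >= 60.0 ? Phase::Delivering : phase;
        case Phase::Delivering: return Phase::Releasing;       // (2) switch action sequence
        case Phase::Releasing:  return Phase::Done;            // trial finished
        case Phase::Done:       return Phase::Done;
    }
    return phase;
}

int main() {
    GripperState g;
    Phase p = Phase::Selecting;
    while (p != Phase::Done)
        p = tick(p, HybridInput{true, true}, g);               // stand-in for live input
    std::cout << "aperture " << g.apertureMm << " mm, height " << g.heightMm << " mm\n";
}
```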
Figure 2
Experimental setup used in this study. The live video of the workspace captured by the camera and the enhanced visual feedback are presented to the user via the monitor. Using the eye-tracking device EyeX, the user can select the object that he/she intends to manipulate. The movement intention is detected by the BMI device Emotiv EPOC+, which is used to confirm the user's selection or to initiate control of the selected object. Dobot executes the reaching, grasping, lifting, delivering, and releasing tasks in response to the trigger commands from the user. The enlarged Graphical User Interface, programmed in C++ under the Qt framework, is shown on the right side of the picture.
Figure 3
The Graz motor imagery BCI stimulation in OpenViBE. The right arrow and the left arrow guide the user to perform the motor imagery task and the relaxation task, respectively.
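The figure describes the training stimulation only; as an illustrative assumption, the sketch below shows one simple way a motor imagery "trigger" could be derived from a single EEG channel via mu-band (8-12 Hz) power suppression. The actual feature extraction and classifier used with OpenViBE are not specified in this excerpt.

```cpp
// Hedged sketch of one way the motor imagery "trigger" could be detected from
// EEG. The mu-band power threshold rule below is an illustrative assumption,
// not the classifier used in the paper.
#include <cmath>
#include <cstddef>
#include <vector>

// Power in [fLo, fHi] Hz of a single-channel epoch, via a naive DFT.
double bandPower(const std::vector<double>& epoch, double fs, double fLo, double fHi) {
    const std::size_t n = epoch.size();
    const double pi = 3.14159265358979323846;
    double power = 0.0;
    for (std::size_t k = 1; k < n / 2; ++k) {
        const double freq = k * fs / static_cast<double>(n);
        if (freq < fLo || freq > fHi) continue;
        double re = 0.0, im = 0.0;
        for (std::size_t t = 0; t < n; ++t) {
            const double w = 2.0 * pi * k * t / n;
            re += epoch[t] * std::cos(w);
            im -= epoch[t] * std::sin(w);
        }
        power += (re * re + im * im) / (static_cast<double>(n) * n);
    }
    return power;
}

// Motor imagery suppresses mu-band power over the motor cortex (event-related
// desynchronization), so a drop well below a calibrated resting baseline is
// treated here as a detected motor imagery state. The 0.5 ratio is assumed.
bool motorImageryDetected(const std::vector<double>& epoch, double fs, double baselineMuPower) {
    return bandPower(epoch, fs, 8.0, 12.0) < 0.5 * baselineMuPower;
}
```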
Figure 4
Mapping of the object coordinates from the image panel to those of the robotic arm workspace. (A) The coordinates of the object in the image panel. (B) The coordinates of the object in the robotic arm workspace.
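Assuming the objects lie on a planar tabletop, the mapping in Figure 4 can be realized with a homography estimated from a few calibration points. The OpenCV-based sketch below illustrates this; the calibration coordinates are placeholders, not values from the paper.

```cpp
// Hedged sketch: image-panel to robot-workspace mapping via a planar homography
// (OpenCV). Calibration point values are placeholders, not from the paper.
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

// Map one pixel coordinate into the robot workspace plane (mm) using H.
cv::Point2f imageToWorkspace(const cv::Point2f& pixel, const cv::Mat& H) {
    std::vector<cv::Point2f> src{pixel}, dst;
    cv::perspectiveTransform(src, dst, H);      // applies the 3x3 homography
    return dst[0];
}

int main() {
    // Pixel positions of four known reference points and the corresponding
    // positions measured in the robotic arm workspace (assumed values).
    std::vector<cv::Point2f> imgPts{{100, 80}, {540, 85}, {535, 400}, {95, 395}};
    std::vector<cv::Point2f> armPts{{150, -100}, {150, 100}, {300, 100}, {300, -100}};
    cv::Mat H = cv::findHomography(imgPts, armPts);

    // Centroid of a segmented cuboid (pixels) -> reaching goal for the arm (mm).
    cv::Point2f goal = imageToWorkspace({320, 240}, H);
    (void)goal;
}
```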
Figure 5
The marker-based tracking method used to calculate the camera pose relative to the real world.
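A generic way to implement the marker-based tracking in Figure 5 is to detect the marker corners in the image and solve a Perspective-n-Point problem against the marker's known physical size. The OpenCV-based sketch below illustrates this; the marker detection library actually used is not stated in this excerpt.

```cpp
// Hedged sketch: camera pose from the four detected corners of a square marker
// of known physical size, via OpenCV's PnP solver. rvec/tvec give the
// marker-to-camera transform, which is what the AR overlays are anchored to.
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

void estimateCameraPose(const std::vector<cv::Point2f>& markerCornersPx,
                        float markerSideMm,
                        const cv::Mat& cameraMatrix,
                        const cv::Mat& distCoeffs,
                        cv::Mat& rvec, cv::Mat& tvec) {
    const float s = markerSideMm / 2.0f;
    // Marker corners in its own coordinate frame (the z = 0 plane).
    std::vector<cv::Point3f> objectPts{{-s, s, 0.0f}, {s, s, 0.0f},
                                       {s, -s, 0.0f}, {-s, -s, 0.0f}};
    // Virtual content defined in the marker frame can then be projected into
    // the live video using rvec, tvec, and the camera intrinsics.
    cv::solvePnP(objectPts, markerCornersPx, cameraMatrix, distCoeffs, rvec, tvec);
}
```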
Figure 6
The relation between the aperture of the robotic gripper and the angle of the servo.
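The aperture-angle relation in Figure 6 can be applied in software as a calibrated lookup: a few measured (aperture, servo angle) pairs interpolated linearly turn a desired aperture into a servo command. The sample values in the sketch below are placeholders rather than the measured curve.

```cpp
// Hedged sketch: desired gripper aperture -> servo angle via piecewise-linear
// interpolation over calibration samples (placeholder values).
#include <array>
#include <cstddef>
#include <utility>

// (aperture in mm, servo angle in degrees), sorted by aperture.
constexpr std::array<std::pair<double, double>, 4> kCalib{{
    {0.0, 60.0}, {12.0, 40.0}, {25.0, 20.0}, {35.0, 0.0}}};

double servoAngleForAperture(double apertureMm) {
    if (apertureMm <= kCalib.front().first) return kCalib.front().second;
    for (std::size_t i = 1; i < kCalib.size(); ++i) {
        if (apertureMm <= kCalib[i].first) {
            const auto& [w0, a0] = kCalib[i - 1];
            const auto& [w1, a1] = kCalib[i];
            return a0 + (a1 - a0) * (apertureMm - w0) / (w1 - w0);  // linear segment
        }
    }
    return kCalib.back().second;  // clamp beyond the calibrated range
}
```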
Figure 7
The process of the object manipulation tasks with and without AR feedback. The region around the gripper is shown enlarged in (C–L). Reaching: (A) The robotic arm is in the initial position. An object can be selected by the gaze points of the user, and a red rectangle then appears around the object, indicating that the user is staring at it. (B) The color of the rectangle changes from red to green when the target object is confirmed by the user, once the motor imagery state is detected. Next, the robotic arm moves to the position for the subsequent grasping. Grasping (AR): (C) A circle with the letter "G" in it appears at the bottom of the GUI, indicating that the system has entered the grasping phase. The orientation of the gripper is adjusted automatically based on the orientation of the object in the workspace. The aperture of the gripper is presented to the user through the AR feedback interface via a virtual box near the object. (D) When the selected object has been grasped tightly, two virtual arrows normal to the gripper are overlaid on the object, representing the grasping force. Lifting (AR): (E) The letter in the circle changes from "G" to "M", indicating a successful switch of the action sequence from the grasping process to the lifting process. The user can control the gripper moving in the vertical direction to lift the object. The height of the gripper above the table is represented by that of a virtual box in the middle of the obstacle. (F) When the virtual box is higher than the obstacle, the altitude of the robotic arm is deemed sufficient for safe delivery. Delivering and Releasing: (G) When the lifting process is completed, the user fixates his/her gaze on the target rectangle and performs motor imagery to trigger the robotic arm to move to the target position automatically. In addition, the color of the rectangle around the object changes from green to cyan, indicating a successful switch of the action sequence. (H) The object is released at the target position. Dobot then returns to the initial position automatically, waiting for the next trial. Grasping and Lifting (NoAR): (I–L) The grasping and lifting processes without AR feedback, where the hybrid Gaze-BMI user has to decide when to stop the current process by visual inspection only.
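As a sketch of how the AR height cue in panels (E–F) could be rendered, the OpenCV snippet below projects a virtual box, defined in the marker/world frame with its height tied to the gripper height, into the live camera image using the pose from the marker tracking step. The box dimensions and color are assumptions.

```cpp
// Hedged sketch (OpenCV): draw a virtual box whose height tracks the gripper
// height, as the AR cue for the lifting phase. Uses the camera pose (rvec,
// tvec) from the marker-based tracking step; box size and color are assumed.
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <cstddef>
#include <vector>

void drawHeightBox(cv::Mat& frame, double gripperHeightMm,
                   const cv::Mat& rvec, const cv::Mat& tvec,
                   const cv::Mat& cameraMatrix, const cv::Mat& distCoeffs) {
    const float w = 30.0f;                                  // assumed footprint, mm
    const float h = static_cast<float>(gripperHeightMm);
    // One face of a vertical box in the marker/world frame (z is up).
    std::vector<cv::Point3f> corners{{0.0f, 0.0f, 0.0f}, {w, 0.0f, 0.0f},
                                     {w, 0.0f, h},        {0.0f, 0.0f, h}};
    std::vector<cv::Point2f> px;
    cv::projectPoints(corners, rvec, tvec, cameraMatrix, distCoeffs, px);
    for (std::size_t i = 0; i < px.size(); ++i)             // outline the face in green
        cv::line(frame, px[i], px[(i + 1) % px.size()], cv::Scalar(0, 255, 0), 2);
}
```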
Figure 8
Comparisons of the number of trigger commands and the height gaps in the object manipulation tasks between the system with AR feedback and that with visual inspection only. Statistically significant performance differences are marked with "*" (p < 0.05). (A) The number of trigger commands used in the grasping process for each subject. (B) The number of trigger commands used in the lifting process for each subject. (C) The height gaps of the gripper for each subject in the object lifting process. (D) The height gaps of the gripper and the number of trigger commands used in the grasping and lifting processes, averaged over all subjects.
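The statistical test behind the significance markers is not specified in this excerpt; as one plausible choice for eight paired observations, the sketch below computes a paired t statistic per metric and compares it with the two-tailed critical value for df = 7.

```cpp
// Hedged sketch: paired comparison of one metric across the eight subjects for
// the AR-feedback vs. visual-inspection-only conditions. A paired t-test is an
// assumption; the test actually used in the paper is not stated here.
#include <array>
#include <cmath>
#include <cstddef>

bool pairedDifferenceSignificant(const std::array<double, 8>& ar,
                                 const std::array<double, 8>& noAr) {
    const std::size_t n = ar.size();
    double meanDiff = 0.0;
    for (std::size_t i = 0; i < n; ++i) meanDiff += (ar[i] - noAr[i]) / n;
    double var = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        const double d = ar[i] - noAr[i] - meanDiff;
        var += d * d / (n - 1);                       // sample variance of differences
    }
    const double t = meanDiff / std::sqrt(var / n);   // paired t statistic
    const double tCrit = 2.365;                       // two-tailed, alpha = 0.05, df = 7
    return std::fabs(t) > tCrit;
}
```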

