J Neuroeng Rehabil. 2010 Aug 23;7:42.
doi: 10.1186/1743-0003-7-42.

Cognitive vision system for control of dexterous prosthetic hands: experimental evaluation


Strahinja Dosen et al. J Neuroeng Rehabil.

Abstract

Background: Recently developed dexterous prosthetic hands, such as SmartHand and i-LIMB, are highly sophisticated: they have individually controllable fingers and a thumb that can abduct/adduct. This flexibility allows the implementation of many different grasping strategies, but it also requires new control algorithms that can exploit the many available degrees of freedom. The current study presents and tests the operation of a new control method for dexterous prosthetic hands.

Methods: The central component of the proposed method is an autonomous controller comprising a vision system with rule-based reasoning mounted on a dexterous hand (CyberHand). The controller, termed cognitive vision system (CVS), mimics biological control and generates commands for prehension. The CVS was integrated into a hierarchical control structure: 1) the user triggers the system and controls the orientation of the hand; 2) a high-level controller automatically selects the grasp type and size; and 3) an embedded hand controller implements the selected grasp using closed-loop position/force control. The operation of the control system was tested in 13 healthy subjects who used the CyberHand, attached to the forearm, to grasp and transport 18 objects placed at two different distances.

Results: The system correctly estimated grasp type and size (nine commands in total) in about 84% of the trials. In an additional 6% of the trials, the grasp type and/or size were different from the optimal ones, but they were still good enough for the grasp to be successful. If the control task was simplified by decreasing the number of possible commands, the classification accuracy increased (e.g., 93% for guessing the grasp type only).

Conclusions: The original outcome of this research is a novel controller empowered by vision and reasoning and capable of high-level analysis (i.e., determining object properties) and autonomous decision making (i.e., selecting the grasp type and size). The automatic control eases the burden on the user and, as a result, the user can concentrate on what he/she does, not on how he/she should do it. The tests showed that the performance of the controller was satisfactory and that the users were able to operate the system with minimal prior training.


Figures

Figure 1
Control system architecture. The Cognitive Vision System (CVS) is integrated into a hierarchical control system for the control of a dexterous prosthetic hand (emulated by the CyberHand prototype). The user triggers the system and controls the orientation of the hand. A high-level controller autonomously selects the grasp type and size that are appropriate for the target object. A low-level controller embedded into the hand provides a stable interface for preshaping and grasping.
Figure 2
The implementation of the control system architecture. The hardware comprises: 1) the cognitive vision system (CVS), 2) a two-channel EMG system, and 3) a PC with a data acquisition card. The PC runs a control application implementing a finite state machine that triggers the following modules (gray boxes): the myoelectric control module, the CVS algorithm and the hand control module. The myoelectric module acquires and processes the EMG, generating a two-bit code signalling the activity of the flexor and extensor muscles. This code is the input for the state machine. The CVS algorithm estimates the size of the target object and uses a set of simple IF-THEN rules to select the grasp type and aperture size appropriate to grasp the object. The hand control module implements the selected grasp parameters by sending the commands to the embedded hand controller (HLHC) via an RS232 link.
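The two-bit code described above can be sketched in a few lines. This is a minimal illustration, assuming a simple rectify-and-average activity detector per EMG channel; the threshold value and window handling are assumptions, not values from the paper.

```python
# Minimal sketch of the two-bit myoelectric code from Figure 2: each EMG
# channel (flexor, extensor) is rectified, averaged over a window, and
# thresholded, yielding one activity bit per muscle group. The threshold
# (0.2) and the detector itself are illustrative assumptions.

def emg_code(flex_window, ext_window, threshold=0.2):
    """Return the (flex_bit, ext_bit) code from two windows of EMG samples."""
    def active(window):
        mean_abs = sum(abs(s) for s in window) / len(window)  # rectify + average
        return 1 if mean_abs > threshold else 0
    return (active(flex_window), active(ext_window))
```

The resulting two-bit code is what drives the finite state machine through its states, so the detector only needs to be reliable enough to distinguish "muscle active" from "muscle at rest" on each channel.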
Figure 3
Experimental platform. The platform consists of: 1) the CyberHand attached onto an orthopaedic splint, 2) the cognitive vision system (CVS) mounted onto the dorsal side of the hand via a pivot joint, and 3) the EMG electrodes for myoelectric control.
Figure 4
A decision tree depicting the IF-THEN rules for the selection of the grasp type and size. The inputs for the rules are the estimated lengths of the object's short (S) and long (L) axes. The lengths are compared against fixed thresholds (T) by following the decision nodes (diamond shapes) of the tree until one of the leaf nodes (rounded rectangles) is reached. The thresholds are defined relative to the hand size and the size of the maximal aperture when the hand is preshaped according to a given grasp type. For example, T_LARGE = 90% of PW, T_THIN = 70% of MLA, T_WIDE = 50% of MPA, and T_VERYWIDE = 65% of MPA, where PW is the width of the palm (from lateral to medial side), while MPA and MLA are the maximal aperture sizes for the palmar and lateral grasps, respectively. For the full set of rules, see Additional file 1.
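A rule set of this form is easy to express in code. The sketch below uses the threshold definitions quoted in the caption, but the branch ordering, grasp labels, and hand dimensions are illustrative assumptions: the full rule set appears only in the paper's Additional file 1.

```python
# Hedged sketch of IF-THEN grasp-selection rules in the style of Figure 4.
# Threshold definitions follow the caption (T_LARGE = 90% of PW,
# T_THIN = 70% of MLA, T_WIDE = 50% of MPA); the tree structure, grasp
# labels, and the hand dimensions below are assumptions for illustration.

PW, MPA, MLA = 80.0, 110.0, 60.0   # palm width and max apertures in mm (assumed)

T_LARGE = 0.90 * PW
T_THIN = 0.70 * MLA
T_WIDE = 0.50 * MPA

def select_grasp(S, L):
    """Map estimated short (S) and long (L) object axes, in mm,
    to a (grasp_type, aperture_size) command."""
    if L > T_LARGE:                  # elongated object
        if S < T_THIN:
            return ("lateral", "thin")
        return ("palmar", "wide" if S < T_WIDE else "very wide")
    # compact object: precision grasps
    if S < T_THIN:
        return ("2-digit", "small")
    return ("3-digit", "medium")
```

Encoding the rules as plain conditionals keeps the controller transparent: every preshape command can be traced back to a single path through the tree.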
Figure 5
The representative outputs of the cognitive vision algorithm. The images depict the detected target object (see Table 2), the measured distance (D), the estimated lengths of its short (S) and long (L) axes, and the estimated grasp type and aperture size. The actual object sizes are given above the images. The estimated object axes are also shown graphically (superimposed gray lines). The bright spot is the reflection of the laser beam. The figure demonstrates that the cognitive vision system estimates grasp types and sizes that are appropriate for the size of the target object. (Notation: Bidigit = 2-digit grasp; Tridigit = 3-digit grasp.)
Figure 6
Finite state machine for the control of the artificial hand. The control is realized as an integration of the cognitive vision system (CVS) with myoelectric control. The two channels of electromyography (EMG) recorded from finger extensors (Ext EMG) and flexors (Flex EMG) drive the system through the states by providing a two-bit binary code (in brackets); the first bit signals the activity of the flexors and the second is for the extensors, while X means "don't care." The user aims the system toward a target object and triggers the hand opening. The CVS estimates the grasp type and size. The user reaches for the object, commands the hand to close, manipulates the object and finally commands the hand to open and release the object. Notations: rounded rectangles - states; full black circle - entry state; arrows - state transitions with events.
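A state machine of this kind can be sketched as a transition table keyed by (state, code) pairs. The state names and the transitions below are assumptions inferred from the caption, and the "don't care" (X) entries are not modeled in this simplified table.

```python
# Illustrative sketch of a finite state machine in the style of Figure 6.
# Inputs are the two-bit (flex, ext) EMG codes; the state names and the
# transition table are assumptions inferred from the figure caption.

TRANSITIONS = {
    # (state, (flex_bit, ext_bit)) -> next state
    ("idle",     (0, 1)): "preshape",  # extensor burst: open hand, run CVS
    ("preshape", (1, 0)): "grasp",     # flexor burst: close hand on the object
    ("grasp",    (0, 1)): "release",   # extensor burst: open hand, drop object
    ("release",  (0, 0)): "idle",      # muscles relaxed: back to rest
}

def step(state, code):
    """Advance the state machine; unrecognized inputs leave the state unchanged."""
    return TRANSITIONS.get((state, code), state)

# One full grasp-and-release cycle driven by a sequence of EMG codes:
state = "idle"
for code in [(0, 1), (1, 0), (0, 1), (0, 0)]:
    state = step(state, code)
# state is back to "idle" after the cycle
```

Keeping the transitions in a single table makes the control logic auditable and easy to extend with additional states or EMG codes.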
Figure 7
Experimental workspace. The notations are: IP - initial position for the hand; A1, A2 - initial positions for the object to be grasped; B1, B2 - target locations for the object placed at A1 and A2, respectively. The task for the subject was to reach for an object, grasp it, transport it to the target location and release it. Two sequences were used depending on the initial position of the object: IP-A1-B1 and IP-A2-B2.
Figure 8
Overall estimation accuracy for the grasp type and size. Both grasp type and size were correctly estimated in 84% of the cases. In 3% of the cases, the type was correct and the size was larger than the correct one. In another 3% of the cases, the estimated grasp was wrong but still similar enough to the optimal one for the subject to accomplish the task. Therefore, from the functional point of view, the classification was successful in 90% of the cases (all gray slices).
Figure 9
Classification accuracy for different numbers of possible outputs. If the number of possible outputs (i.e., hand preshape commands) that the IF-THEN rules can generate is decreased, the success rate improves. Groups: 1 - all grasp types and sizes; 2 - two grasp sizes for the lateral and palmar grasps and one grasp size for the 3-digit and 2-digit grasps; 3 - only grasp types (i.e., one grasp size for all grasp types).
Figure 10
Improvements in performance due to learning. The figure shows the results (time spent to accomplish the task) organized as a) individual trials and b) blocks of trials. The vertical axis is the time needed to accomplish the task. In plot a), the trend obtained by fitting a cubic polynomial through the experimental results (black dots) is shown by a continuous line, and the boundaries between the blocks of trials are depicted by the dashed vertical lines. In plot b), the horizontal lines are the medians, the boxes show interquartile ranges, and the whiskers are the minimal and maximal values. A statistically significant difference is denoted by a star. The time needed to successfully accomplish the task decreased steadily during the experiment.
