Human-robot planar co-manipulation of extended objects: data-driven models and control from human-human dyads

Erich Mielke et al. Front Neurorobot. 2024 Feb 12;18:1291694. doi: 10.3389/fnbot.2024.1291694. eCollection 2024.

Abstract

Human teams can easily perform collaborative manipulation tasks. However, it is difficult for a robot and a human to simultaneously manipulate a large extended object because of the inherent ambiguity in the desired motion. Our approach in this paper is to leverage data from human-human dyad experiments to determine motion intent for a physical human-robot co-manipulation task. We do this by showing that the human-human dyad data exhibit distinct torque triggers for a lateral movement. As an alternative intent-estimation method, we also develop a deep neural network, trained on motion data from human-human trials, that predicts future trajectories from past object motion. We then show how force and motion data can be used to determine robot control in a human-robot dyad. Finally, we compare human-human dyad performance to that of two controllers we developed for human-robot co-manipulation. We evaluate these controllers in three-degree-of-freedom planar motion, where it is ambiguous whether the task involves rotation or translation.

Keywords: cooperative manipulation; force control; human-robot interaction; learning and adaptive systems; neural network; physical human-robot interaction; variable impedance.


Conflict of interest statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Figures

Figure 1
Left: A leader and a blindfolded follower performing a table-carrying task. Right: A Rethink Robotics Baxter robot mounted on a holonomic base carrying the table with a person.
Figure 2
Anatomical direction reference with corresponding table axis: X is anterior, Y is lateral, and Z is superior.
Figure 3
Examples of the simple planar translation and rotation tasks executed by each H-H dyad and emulated by the human-robot dyads in this paper. Used with permission (Jensen et al., 2021). (A) H-H translation task. (B) H-H rotation task.
Figure 4
First 4 seconds of trials showing torque trends for rotation and translation tasks for both directions of motion: dashed lines are individual trials, bold lines are averages over all types of trials. (A) z-axis torque patterns. (B) x-axis torque patterns.
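
A hypothetical sketch of how such a torque trigger might be detected is shown below: lateral-movement intent is flagged once the z-axis torque magnitude stays above a threshold for a short dwell time. The threshold and dwell values are illustrative assumptions, not quantities reported in this paper.

```python
# Hypothetical torque-trigger detector for the pattern in Figure 4.
# The threshold (N*m) and dwell time (samples) are illustration values,
# not parameters reported by the paper.
import numpy as np

def detect_lateral_intent(tau_z, threshold=2.0, dwell_steps=25):
    """Return True once |tau_z| exceeds `threshold` for `dwell_steps` consecutive samples."""
    above = np.abs(np.asarray(tau_z)) > threshold
    run = 0
    for flag in above:
        run = run + 1 if flag else 0
        if run >= dwell_steps:
            return True
    return False
```
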
Figure 5
Plot showing the lateral velocity profile for the beginning of Task 5, a complex 3D task involving obstacle avoidance; this portion of the task includes a lateral translation of over two meters.
Figure 6
Control loops for co-manipulation of an extended object, showing the human (green box) communicating intent haptically through a force sensor; the desired velocity is then calculated using the specified control law and sent to the velocity controller. (A) Control loop for BMVIC. (B) Control loop for EVIC.
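
As a point of reference for these loops, below is a minimal admittance-style sketch of the force-to-velocity mapping, assuming a simple virtual mass-damper law. The gains are placeholders, and this generic law stands in for, rather than reproduces, the paper's BMVIC and EVIC controllers.

```python
# Generic admittance-style sketch of the Figure 6 structure: sensed
# interaction force is mapped to a desired velocity, which is then handed
# to a lower-level velocity controller. Virtual mass/damping values are
# placeholder assumptions, not the paper's BMVIC/EVIC laws.
import numpy as np

def desired_velocity(v_d, f_meas, m_virt=10.0, b_virt=25.0, dt=0.01):
    """One Euler step of the admittance law m*v_dot + b*v = f."""
    v_dot = (f_meas - b_virt * v_d) / m_virt
    return v_d + v_dot * dt

# Example: a sensed lateral force drives the lateral desired velocity,
# which settles toward the steady state f/b (here 12/25 = 0.48 m/s).
v_d = np.zeros(3)                  # planar DOFs: x, y, yaw
f = np.array([0.0, 12.0, 0.0])     # sensed wrench components (N, N, N*m)
for _ in range(100):               # 1 s of control at 100 Hz
    v_d = desired_velocity(v_d, f)
```
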
Figure 7
Basic control loop structure of intent estimation in co-manipulation. The human moves the co-manipulated object, and the motion of the object, x, is fed into an intent estimator, which determines a desired motion of the robot, xd. The commanded robot motion, xr, and the resulting actual motion, xa, then influence the object motion, as well as the human leader. For the network, time-series motion data (Left), which are the inputs, are sent through a fully connected layer, a ReLU layer, an LSTM cell RNN, and another fully connected layer before predicted velocities are given as outputs (Right).
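
A minimal PyTorch sketch of this architecture follows. The caption specifies only the layer types, so the layer widths and input/output dimensions below are assumptions for illustration.

```python
# Minimal sketch of the Figure 7 network: fully connected -> ReLU ->
# LSTM -> fully connected, mapping time-series motion data to predicted
# velocities. All dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class IntentEstimator(nn.Module):
    def __init__(self, input_dim=6, hidden_dim=64, output_dim=3):
        super().__init__()
        self.fc_in = nn.Linear(input_dim, hidden_dim)    # fully connected layer
        self.relu = nn.ReLU()                            # ReLU layer
        self.lstm = nn.LSTM(hidden_dim, hidden_dim,      # LSTM cell RNN
                            batch_first=True)
        self.fc_out = nn.Linear(hidden_dim, output_dim)  # predicted velocities

    def forward(self, x):
        # x: (batch, time_steps, input_dim) time-series motion data
        h = self.relu(self.fc_in(x))
        h, _ = self.lstm(h)
        return self.fc_out(h[:, -1, :])  # velocity prediction at the final step
```
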
Figure 8
Neural network prediction explanation. Previous time steps (shown in red) are used to obtain one future prediction of states (shown in green). This predicted state is then appended to the previous time steps, the first time step is removed, and the network is run again to obtain multiple future predictions.
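
In code, this recursive scheme amounts to a sliding-window rollout, sketched below under the assumption that the network predicts the full state vector, so that its output can be fed back as the newest time step.

```python
# Sliding-window rollout matching Figure 8: append the newest prediction,
# drop the oldest time step, and run the network again. `model` is assumed
# to map a (1, T, state_dim) window to a (1, state_dim) next-state prediction.
import torch

def rollout(model, window, n_future):
    """window: (1, T, state_dim) past states; returns (n_future, state_dim)."""
    preds = []
    with torch.no_grad():
        for _ in range(n_future):
            next_state = model(window)              # one future prediction
            preds.append(next_state.squeeze(0))
            # append prediction, remove the first time step
            window = torch.cat([window[:, 1:, :],
                                next_state.unsqueeze(1)], dim=1)
    return torch.stack(preds)
```
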
Figure 9
Validation of the neural network for a lateral translation task; thin lines are actual velocities, and bold lines are predictions for future time steps.
Figure 10
Representation of the ambiguity of a translation task (moving from the top to bottom left) and a rotation task (rotating from the top to bottom right), where Agent R represents a robot, and Agent H represents a human. Agent R will, at least initially, “sense” the same signal or force due to the extent of the object immediately after Agent H initiates movement to either of the final positions.
Figure 11
Undershooting behavior of a human-robot dyad for a translation task, where bold vertical lines indicate start and stop points, and the dashed vertical line indicates the 90% completion point. Movement after this point is considered a fine motor adjustment.
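
As an illustration of this metric, the sketch below computes the 90% completion point as the first time at which the object has covered 90% of its net displacement; the trajectory and variable names are invented for the example.

```python
# Illustrative computation of the 90% completion point from Figure 11:
# the first time at which the object has covered 90% of its net lateral
# displacement. Motion after this instant is treated as fine adjustment.
import numpy as np

def completion_point(t, y, fraction=0.9):
    """Return the first time at which |y - y0| reaches `fraction` of the net displacement."""
    progress = np.abs(np.asarray(y) - y[0])
    idx = np.argmax(progress >= fraction * progress[-1])
    return t[idx]

t = np.linspace(0.0, 10.0, 1001)
y = 2.0 * (1.0 - np.exp(-t))       # toy lateral trajectory (m)
print(completion_point(t, y))      # ~2.3 s for this toy trajectory
```
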
