Impact of Physical Parameters and Vision Data on Deep Learning-Based Grip Force Estimation for Fluidic Origami Soft Grippers

Rho, Eojin; Kim, Woongbae; Mun, Jungwook; Yu, Sung Yol; Cho, Kyu-Jin; Jo, Sungho
Institute of Electrical and Electronics Engineers Inc.
IEEE Robotics and Automation Letters, v.9, no.3, pp.2487 - 2494
Knowing the gripping force applied to an object is important for improving grip quality, as well as for preventing surface damage or breakage of fragile objects. In the case of soft grippers, however, attaching or embedding force/pressure sensors can compromise their adaptability or constrain their design in scenarios involving significant deformation or deployment. In this paper, we present a vision-based neural network (OriGripNet) that estimates gripping force by combining RGB image data with key parameters extracted from the physical features of a soft gripper. Real-world force data were collected using a reconfigurable test object with an embedded load cell, while image data were collected by an RGB camera mounted on the wrist of a robotic arm. In addition, joint position information of the pneumatically driven origami gripper, extracted from the images, and the applied pressure were used for training OriGripNet. OriGripNet achieved a mean absolute error (MAE) of 0.0636 N when tested on untrained objects, although some estimates exhibited errors exceeding 20%. Nevertheless, the results show that pressure, joint position, and image information each have their own strengths in force and contact estimation, and that they have a synergistic effect on performance when combined.
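The abstract describes a late-fusion idea: image-derived features are combined with the applied pressure and joint positions extracted from the RGB view, and a network regresses the grip force. The sketch below is an illustrative NumPy mock-up of that fusion pattern, not the authors' OriGripNet architecture; the feature dimensions, layer sizes, and randomly initialized weights are all assumptions standing in for a trained image backbone and regression head.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def dense(x, w, b):
    return x @ w + b

# Hypothetical dimensions (assumptions, not taken from the paper):
IMG_FEAT = 64   # features from an image backbone (e.g., a small CNN)
N_JOINTS = 4    # joint positions extracted from the RGB image

# Randomly initialized weights stand in for trained parameters.
w1 = rng.normal(0.0, 0.1, (IMG_FEAT + N_JOINTS + 1, 32))
b1 = np.zeros(32)
w2 = rng.normal(0.0, 0.1, (32, 1))
b2 = np.zeros(1)

def estimate_force(img_feat, joints, pressure):
    """Late fusion: concatenate the three modalities, then regress force (N)."""
    x = np.concatenate([img_feat, joints, [pressure]])
    h = relu(dense(x, w1, b1))
    return float(dense(h, w2, b2)[0])

force = estimate_force(rng.normal(size=IMG_FEAT),
                       rng.normal(size=N_JOINTS),
                       50.0)  # applied pneumatic pressure (arbitrary units)
print(f"estimated force: {force:.4f} N")
```

With random weights the output is meaningless; the point is only the shape of the model: each modality contributes its own slice of the fused input, which is consistent with the paper's observation that the modalities have complementary strengths.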
Grippers; Force; Estimation; Sensors; Geometry; Robots; Deep learning; Soft sensors and actuators; deep learning in grasping and manipulation; force and tactile sensing
Appears in Collections:
KIST Article > 2024

