Full metadata record

DC Field    Value    Language
dc.contributor.author    Rho, Eojin    -
dc.contributor.author    Kim, Woongbae    -
dc.contributor.author    Mun, Jungwook    -
dc.contributor.author    Yu, Sung Yol    -
dc.contributor.author    Cho, Kyu-Jin    -
dc.contributor.author    Jo, Sungho    -
dc.date.accessioned    2024-04-25T07:11:20Z    -
dc.date.available    2024-04-25T07:11:20Z    -
dc.date.created    2024-04-25    -
dc.date.issued    2024-03    -
dc.identifier.issn    2377-3766    -
dc.identifier.uri    https://pubs.kist.re.kr/handle/201004/149747    -
dc.description.abstract    Knowing the gripping force being applied to an object is important for improving the quality of the grip, as well as for preventing surface damage or breakage of fragile objects. In the case of soft grippers, however, attaching or embedding force/pressure sensors can compromise their adaptability or constrain their design in scenarios involving significant deformation or deployment. In this paper, we present a vision-based neural network (OriGripNet) that estimates gripping force by combining RGB image data with key parameters extracted from the physical features of a soft gripper. Real-world force data was collected using a reconfigurable test object with an embedded load cell, while image data was collected by an RGB camera mounted on the wrist of a robotic arm. In addition, joint position information of the pneumatically driven origami gripper, extracted from the images, and the applied pressure were used to train OriGripNet. OriGripNet showed a mean absolute error (MAE) of 0.0636 N when tested on untrained objects, although some estimates exhibited errors exceeding 20%. Nevertheless, the results show that pressure, joint position, and image information have their own strengths in force estimation and contact estimation, and that combining them has a synergistic effect on performance.    -
dc.language    English    -
dc.publisher    Institute of Electrical and Electronics Engineers Inc.    -
dc.title    Impact of Physical Parameters and Vision Data on Deep Learning-Based Grip Force Estimation for Fluidic Origami Soft Grippers    -
dc.type    Article    -
dc.identifier.doi    10.1109/LRA.2024.3356979    -
dc.description.journalClass    1    -
dc.identifier.bibliographicCitation    IEEE Robotics and Automation Letters, v.9, no.3, pp.2487 - 2494    -
dc.citation.title    IEEE Robotics and Automation Letters    -
dc.citation.volume    9    -
dc.citation.number    3    -
dc.citation.startPage    2487    -
dc.citation.endPage    2494    -
dc.description.isOpenAccess    N    -
dc.description.journalRegisteredClass    scie    -
dc.description.journalRegisteredClass    scopus    -
dc.identifier.wosid    001167534400014    -
dc.identifier.scopusid    2-s2.0-85183661190    -
dc.relation.journalWebOfScienceCategory    Robotics    -
dc.relation.journalResearchArea    Robotics    -
dc.type.docType    Article    -
dc.subject.keywordAuthor    Grippers    -
dc.subject.keywordAuthor    Force    -
dc.subject.keywordAuthor    Estimation    -
dc.subject.keywordAuthor    Sensors    -
dc.subject.keywordAuthor    Geometry    -
dc.subject.keywordAuthor    Robots    -
dc.subject.keywordAuthor    Deep learning    -
dc.subject.keywordAuthor    Soft sensors and actuators    -
dc.subject.keywordAuthor    deep learning in grasping and manipulation    -
dc.subject.keywordAuthor    force and tactile sensing    -
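Note: The abstract above describes fusing wrist-camera RGB images with physical parameters (applied actuation pressure and joint positions extracted from the images) to regress grip force. The sketch below is a minimal, hypothetical Python/PyTorch illustration of that kind of multimodal regressor; the layer sizes, input resolution, number of joints, and concatenation-based fusion are assumptions for illustration only and are not the published OriGripNet architecture.

# Hypothetical sketch (not the authors' released code): a multimodal regressor
# that fuses an RGB image encoder with scalar physical inputs (applied pressure
# and joint positions), in the spirit of the approach described in the abstract.
# All layer sizes and the fusion scheme here are illustrative assumptions.
import torch
import torch.nn as nn


class GripForceNet(nn.Module):
    def __init__(self, num_joints: int = 4):
        super().__init__()
        # Small CNN encoder for the wrist-camera RGB image (assumed 3x128x128 input).
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> (batch, 64)
        )
        # MLP for the physical parameters: applied pressure + joint positions.
        self.param_encoder = nn.Sequential(
            nn.Linear(1 + num_joints, 32), nn.ReLU(),
            nn.Linear(32, 32), nn.ReLU(),
        )
        # Fusion head regresses a single grip-force value (newtons).
        self.head = nn.Sequential(
            nn.Linear(64 + 32, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, image, pressure, joint_positions):
        img_feat = self.image_encoder(image)                        # (B, 64)
        phys_feat = self.param_encoder(
            torch.cat([pressure, joint_positions], dim=1))          # (B, 32)
        return self.head(torch.cat([img_feat, phys_feat], dim=1))   # (B, 1)


if __name__ == "__main__":
    net = GripForceNet(num_joints=4)
    image = torch.randn(8, 3, 128, 128)    # wrist-camera RGB frames
    pressure = torch.randn(8, 1)           # actuation pressure (normalized)
    joints = torch.randn(8, 4)             # joint positions extracted from images
    force = net(image, pressure, joints)   # predicted grip force, shape (8, 1)
    # Training would minimize an L1/MSE loss against load-cell force measurements.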
Appears in Collections:
KIST Article > 2024
Files in This Item:
There are no files associated with this item.