Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Rho, Eojin | - |
dc.contributor.author | Kim, Woongbae | - |
dc.contributor.author | Mun, Jungwook | - |
dc.contributor.author | Yu, Sung Yol | - |
dc.contributor.author | Cho, Kyu-Jin | - |
dc.contributor.author | Jo, Sungho | - |
dc.date.accessioned | 2024-04-25T07:11:20Z | - |
dc.date.available | 2024-04-25T07:11:20Z | - |
dc.date.created | 2024-04-25 | - |
dc.date.issued | 2024-03 | - |
dc.identifier.issn | 2377-3766 | - |
dc.identifier.uri | https://pubs.kist.re.kr/handle/201004/149747 | - |
dc.description.abstract | Knowing the gripping force being applied to an object is important for improving the quality of the grip, as well as for preventing surface damage or breakage of fragile objects. In the case of soft grippers, however, attaching or embedding force/pressure sensors can compromise their adaptability or constrain their design in scenarios involving significant deformation or deployment. In this paper, we present a vision-based neural network (OriGripNet) that estimates gripping force by combining RGB image data with key parameters extracted from the physical features of a soft gripper. Real-world force data were collected using a reconfigurable test object with an embedded load cell, while image data were collected by an RGB camera mounted on the wrist of a robotic arm. In addition, joint position information of the pneumatically driven origami gripper, extracted from the images, and the applied pressure were used to train OriGripNet. OriGripNet showed a mean absolute error (MAE) of 0.0636 N when tested on untrained objects, although some estimates exhibited errors exceeding 20%. Nevertheless, the results show that pressure, joint position, and image information each have their own strengths in force estimation and contact estimation, and that they have a synergistic effect on performance when combined. | - |
dc.language | English | - |
dc.publisher | Institute of Electrical and Electronics Engineers Inc. | - |
dc.title | Impact of Physical Parameters and Vision Data on Deep Learning-Based Grip Force Estimation for Fluidic Origami Soft Grippers | - |
dc.type | Article | - |
dc.identifier.doi | 10.1109/LRA.2024.3356979 | - |
dc.description.journalClass | 1 | - |
dc.identifier.bibliographicCitation | IEEE Robotics and Automation Letters, v.9, no.3, pp.2487 - 2494 | - |
dc.citation.title | IEEE Robotics and Automation Letters | - |
dc.citation.volume | 9 | - |
dc.citation.number | 3 | - |
dc.citation.startPage | 2487 | - |
dc.citation.endPage | 2494 | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.identifier.wosid | 001167534400014 | - |
dc.identifier.scopusid | 2-s2.0-85183661190 | - |
dc.relation.journalWebOfScienceCategory | Robotics | - |
dc.relation.journalResearchArea | Robotics | - |
dc.type.docType | Article | - |
dc.subject.keywordAuthor | Grippers | - |
dc.subject.keywordAuthor | Force | - |
dc.subject.keywordAuthor | Estimation | - |
dc.subject.keywordAuthor | Sensors | - |
dc.subject.keywordAuthor | Geometry | - |
dc.subject.keywordAuthor | Robots | - |
dc.subject.keywordAuthor | Deep learning | - |
dc.subject.keywordAuthor | Soft sensors and actuators | - |
dc.subject.keywordAuthor | deep learning in grasping and manipulation | - |
dc.subject.keywordAuthor | force and tactile sensing | - |
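
The abstract describes fusing a wrist-camera RGB image with scalar physical parameters (joint positions extracted from the images, plus the applied pneumatic pressure) to regress a single grip-force value. The sketch below illustrates one plausible shape for such a multimodal regressor; the layer sizes, input resolution, and number of joint features are illustrative assumptions, not the authors' published OriGripNet architecture.

```python
# Hypothetical sketch only -- NOT the authors' OriGripNet architecture.
import torch
import torch.nn as nn

class GripForceNet(nn.Module):
    """Fuses an RGB image embedding with scalar physical features
    (joint positions, applied pressure) to regress grip force [N]."""

    def __init__(self, num_joints: int = 4):
        super().__init__()
        # Small CNN encoder for the wrist-camera RGB image (64x64 assumed).
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (B, 32)
        )
        # MLP encoder for joint positions plus one pressure value.
        self.state_encoder = nn.Sequential(
            nn.Linear(num_joints + 1, 32), nn.ReLU(),
        )
        # Fusion head: concatenate both embeddings, regress one force value.
        self.head = nn.Sequential(
            nn.Linear(32 + 32, 32), nn.ReLU(), nn.Linear(32, 1),
        )

    def forward(self, image, joints, pressure):
        z_img = self.image_encoder(image)                          # (B, 32)
        z_state = self.state_encoder(torch.cat([joints, pressure], dim=1))
        return self.head(torch.cat([z_img, z_state], dim=1))      # (B, 1)

# Smoke test with random tensors standing in for real sensor data.
model = GripForceNet(num_joints=4)
force = model(torch.randn(2, 3, 64, 64), torch.randn(2, 4), torch.randn(2, 1))
print(force.shape)  # torch.Size([2, 1])
```

Training such a model against load-cell readings (as the abstract describes) would be a standard regression setup, e.g. minimizing MAE between the predicted and measured force; ablating the `joints` or `pressure` inputs is one way to probe the per-modality strengths the abstract reports.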