Full metadata record

DC Field: Value
dc.contributor.author: Ha, Junhyoung
dc.contributor.author: An, Byungchul
dc.contributor.author: Kim, Soon kyum
dc.date.accessioned: 2024-01-12T02:31:59Z
dc.date.available: 2024-01-12T02:31:59Z
dc.date.created: 2022-09-25
dc.date.issued: 2023-03
dc.identifier.issn: 1551-3203
dc.identifier.uri: https://pubs.kist.re.kr/handle/201004/75792
dc.description.abstract: In a graph search algorithm, a given environment is represented as a graph comprising a set of feasible system configurations and their neighboring connections. A path is generated by connecting the initial and goal configurations through graph exploration, where the path is often desired to be optimal or bounded suboptimal. The computational performance of optimal path generation depends on avoiding unnecessary exploration. Accordingly, heuristic functions have been widely adopted to guide the exploration efficiently by providing estimated costs to the goal configuration. Exploration is efficient when the heuristic function closely estimates the optimal cost, which remains challenging because it requires a comprehensive understanding of the environment. This challenge, however, leaves room to improve computational efficiency over existing methods. Herein, we propose Reinforcement Learning Heuristic A* (RLHA*), which adopts an artificial neural network as a learning heuristic function to closely estimate the optimal cost while achieving a bounded suboptimal path. Instead of being trained on precomputed paths, the learning heuristic function keeps improving by using self-generated paths. Numerous simulations demonstrate the consistent and robust performance of RLHA* in comparison with existing methods. (An illustrative search sketch follows this metadata record.)
dc.language: English
dc.publisher: Institute of Electrical and Electronics Engineers
dc.title: Reinforcement Learning Heuristic A*
dc.type: Article
dc.identifier.doi: 10.1109/TII.2022.3188359
dc.description.journalClass: 1
dc.identifier.bibliographicCitation: IEEE Transactions on Industrial Informatics, v.19, no.3, pp.2307 - 2316
dc.citation.title: IEEE Transactions on Industrial Informatics
dc.citation.volume: 19
dc.citation.number: 3
dc.citation.startPage: 2307
dc.citation.endPage: 2316
dc.description.isOpenAccess: N
dc.description.journalRegisteredClass: scie
dc.description.journalRegisteredClass: scopus
dc.identifier.wosid: 000967277300001
dc.identifier.scopusid: 2-s2.0-85134225033
dc.relation.journalWebOfScienceCategory: Automation & Control Systems
dc.relation.journalWebOfScienceCategory: Computer Science, Interdisciplinary Applications
dc.relation.journalWebOfScienceCategory: Engineering, Industrial
dc.relation.journalResearchArea: Automation & Control Systems
dc.relation.journalResearchArea: Computer Science
dc.relation.journalResearchArea: Engineering
dc.type.docType: Article
dc.subject.keywordAuthor: Costs
dc.subject.keywordAuthor: Graph Search
dc.subject.keywordAuthor: Heuristic algorithms
dc.subject.keywordAuthor: Path planning
dc.subject.keywordAuthor: Path Planning
dc.subject.keywordAuthor: Planning
dc.subject.keywordAuthor: Reinforcement learning
dc.subject.keywordAuthor: Reinforcement Learning
dc.subject.keywordAuthor: Robots
dc.subject.keywordAuthor: Signal processing algorithms
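
The abstract above describes RLHA* only at a high level: an A*-style graph search whose heuristic is a learned function (an artificial neural network) and whose output is a bounded suboptimal path. The sketch below is a minimal, generic illustration of that idea, assuming a weighted A* search in which the heuristic is a pluggable callable that a trained model could replace. The names (`a_star`, `grid_neighbors`), the grid example, and the fixed weight `w` as the bounding mechanism are illustrative assumptions, not the authors' implementation; the paper's training loop on self-generated paths is not reproduced here.

```python
# Minimal sketch, assuming a generic weighted A* with a pluggable heuristic.
# This is NOT the authors' RLHA* implementation; in RLHA* the heuristic
# callable would be a neural network trained on self-generated paths.
import heapq
import itertools

def a_star(start, goal, neighbors, edge_cost, heuristic, w=1.5):
    """Search with priority f = g + w * heuristic(n, goal).

    With a consistent (never overestimating) heuristic, a weight w >= 1
    gives the classic weighted-A* bound: the returned path costs at most
    w times the optimal cost.
    """
    tie = itertools.count()                      # tie-breaker for equal f-values
    open_heap = [(w * heuristic(start, goal), next(tie), 0.0, start, None)]
    parent_of = {}                               # node -> parent; also marks "expanded"
    best_g = {start: 0.0}
    while open_heap:
        _, _, g, node, parent = heapq.heappop(open_heap)
        if node in parent_of:                    # already expanded with a better g
            continue
        parent_of[node] = parent
        if node == goal:                         # reconstruct path by walking parents
            path = [node]
            while parent_of[path[-1]] is not None:
                path.append(parent_of[path[-1]])
            return list(reversed(path)), g
        for nxt in neighbors(node):
            new_g = g + edge_cost(node, nxt)
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                f = new_g + w * heuristic(nxt, goal)
                heapq.heappush(open_heap, (f, next(tie), new_g, nxt, node))
    return None, float("inf")                    # goal unreachable

# Usage on a 10x10 four-connected grid, with Manhattan distance standing in
# for the learned heuristic (an assumption made purely for this example).
def grid_neighbors(p):
    x, y = p
    steps = ((1, 0), (-1, 0), (0, 1), (0, -1))
    return [(x + dx, y + dy) for dx, dy in steps
            if 0 <= x + dx < 10 and 0 <= y + dy < 10]

manhattan = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
path, cost = a_star((0, 0), (9, 9), grid_neighbors, lambda a, b: 1.0, manhattan)
print(len(path), cost)                           # 19 nodes, cost 18.0
```

The heuristic argument is deliberately just a callable so that a trained model's forward pass could be dropped in; how RLHA* actually trains that function and bounds its suboptimality is described in the cited article, not in this sketch.
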
Appears in Collections:
KIST Article > 2023
Files in This Item:
There are no files associated with this item.