Reinforcement Learning Heuristic A*

Authors
Ha, Junhyoung; An, Byungchul; Kim, Soonkyum
Issue Date
2023-03
Publisher
Institute of Electrical and Electronics Engineers
Citation
IEEE Transactions on Industrial Informatics, v.19, no.3, pp. 2307-2316
Abstract
In a graph search algorithm, a given environment is represented as a graph comprising a set of feasible system configurations and their neighboring connections. A path is generated by connecting the initial and goal configurations through graph exploration, and the path is often desired to be optimal or suboptimal. The computational performance of optimal path generation depends on avoiding unnecessary explorations. Accordingly, heuristic functions have been widely adopted to guide exploration efficiently by providing estimated costs to the goal configuration. Exploration is efficient when the heuristic function closely estimates the optimal cost, which remains challenging because it requires a comprehensive understanding of the environment. This challenge, however, leaves room to improve computational efficiency over existing methods. Herein, we propose Reinforcement Learning Heuristic A* (RLHA*), which adopts an artificial neural network as a learning heuristic function to closely estimate the optimal cost while achieving a bounded suboptimal path. Instead of being trained on precomputed paths, the learning heuristic function keeps improving by using self-generated paths. Numerous simulations demonstrate the consistent and robust performance of RLHA* in comparison with existing methods.
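The abstract's central idea is a heuristic that plugs into A* as an interchangeable cost-to-go estimator. The following is a minimal illustrative sketch only, not the authors' implementation: a generic A* that accepts the heuristic as a function, so a learned (e.g., neural-network) estimate could stand in for the hand-crafted Manhattan distance used in the grid example below.

```python
import heapq

def astar(start, goal, neighbors, heuristic):
    """Generic A* search. `heuristic(n)` estimates the cost-to-go from n;
    RLHA*'s idea is to replace a hand-crafted estimate with a learned one.
    Returns (path, cost) or (None, inf) if the goal is unreachable."""
    open_heap = [(heuristic(start), 0, start)]  # entries: (f, g, node)
    g = {start: 0}
    parent = {start: None}
    closed = set()
    while open_heap:
        _, gc, node = heapq.heappop(open_heap)
        if node == goal:
            # Reconstruct the path by walking parent pointers back to start.
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1], gc
        if node in closed:
            continue
        closed.add(node)
        for nxt, cost in neighbors(node):
            ng = gc + cost
            if ng < g.get(nxt, float("inf")):
                g[nxt] = ng
                parent[nxt] = node
                heapq.heappush(open_heap, (ng + heuristic(nxt), ng, nxt))
    return None, float("inf")

# Toy 4-connected 5x5 grid, unit edge costs; the Manhattan distance below
# is a stand-in for a learned cost-to-go estimator.
def grid_neighbors(p):
    x, y = p
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < 5 and 0 <= ny < 5:
            yield (nx, ny), 1

path, cost = astar((0, 0), (4, 4), grid_neighbors,
                   lambda p: abs(p[0] - 4) + abs(p[1] - 4))
```

A tighter heuristic (closer to the true optimal cost) prunes more of the open list; the paper's contribution is learning such an estimate from self-generated paths while keeping the path bounded suboptimal.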
Keywords
Costs; Graph search; Heuristic algorithms; Path planning; Planning; Reinforcement learning; Robots; Signal processing algorithms
ISSN
1551-3203
URI
https://pubs.kist.re.kr/handle/201004/75792
DOI
10.1109/TII.2022.3188359
Appears in Collections:
KIST Article > 2023
Files in This Item:
There are no files associated with this item.