Reinforcement Learning Framework to Simulate Short-Term Learning Effects of Human Psychophysical Experiments Assessing the Quality of Artificial Vision
- Authors
- An, Na Min; Roh, Hyeonhee; Kim, Sein; Kim, Jae Hun; Im, Maesoon
- Issue Date
- 2023-06
- Publisher
- IEEE
- Citation
- International Joint Conference on Neural Networks (IJCNN)
- Abstract
- The quality of the artificial vision produced by visual prostheses has traditionally been evaluated through human psychophysical tests using images rendered as an array of phosphenes. However, such experiments involving human subjects are considerably time-consuming and labor-intensive. One potential solution is an efficient approach that assists or replaces psychophysical experiments exhibiting a short-term learning effect in human subjects. The present work developed a reinforcement learning (RL)-based feedback framework that builds artificial agents to emulate the behavioral changes of human subjects as they learn over experimental trials. In our framework, we first trained an agent that gradually learns to identify 720 faces presented as low-resolution phosphene images, using feedback rewards derived from agreement with the perceptions of nine training human subjects. Then, in the automation stage of the framework, we created nine RL agents and found, by testing them, that they mimicked the learning effects of nine test human subjects better than nine instances of a supervised learning (SL) model. Given the similarity of the outcomes to human tests and the time efficiency of RL, our framework may expedite the development of visual prosthetic systems by at least partially replacing laborious human psychophysical experiments.
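The reward-from-human-agreement loop described in the abstract can be pictured with a minimal sketch. The code below is not the paper's implementation: the linear softmax policy, the image dimensions, the simulated trial generator, and the learning rate are all illustrative assumptions standing in for the framework's agent, the 720-face phosphene stimuli, and the training subjects' responses. It shows a contextual-bandit REINFORCE update in which the agent is rewarded only when its face identification agrees with the (simulated) human response on the same trial, so its agreement rate rises over trials in the spirit of the short-term learning effect.

```python
# Minimal illustrative sketch (not the paper's implementation): an RL agent is
# rewarded when its face identification agrees with a human subject's response
# on the same phosphene-image trial. All names and numbers are assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_IDENTITIES = 8      # assumed: candidate face identities per block (paper uses 720 faces overall)
IMG_DIM = 32 * 32     # assumed: flattened low-resolution phosphene image size
N_TRIALS = 2000       # assumed: number of simulated experimental trials
LEARNING_RATE = 0.05  # assumed step size

# Linear policy: one weight vector per identity; softmax over scores gives action probabilities.
weights = np.zeros((N_IDENTITIES, IMG_DIM))

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def simulate_trial():
    """Assumed stand-in for one psychophysical trial: a phosphene image plus
    the response a human subject gave for that image (correct 80% of the time)."""
    true_id = rng.integers(N_IDENTITIES)
    image = rng.normal(0.0, 1.0, size=IMG_DIM)
    image[true_id::N_IDENTITIES] += 1.0  # weak identity-dependent structure
    human_response = true_id if rng.random() < 0.8 else rng.integers(N_IDENTITIES)
    return image, human_response

agreement_history = []
for t in range(N_TRIALS):
    image, human_response = simulate_trial()

    # Agent picks an identity according to its current policy.
    probs = softmax(weights @ image)
    action = rng.choice(N_IDENTITIES, p=probs)

    # Feedback reward: +1 if the agent agrees with the human subject, 0 otherwise.
    reward = 1.0 if action == human_response else 0.0

    # REINFORCE-style policy-gradient update for a single-step (bandit) episode:
    # grad of log pi(a|s) for a linear softmax policy is (1_a - pi) outer s.
    grad = -probs[:, None] * image[None, :]
    grad[action] += image
    weights += LEARNING_RATE * reward * grad

    agreement_history.append(reward)

# A rising mean agreement over trials plays the role of the short-term learning effect.
print("agreement in last 200 trials:", np.mean(agreement_history[-200:]))
```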
- ISSN
- 2161-4393
- URI
- https://pubs.kist.re.kr/handle/201004/76435
- DOI
- 10.1109/IJCNN54540.2023.10191870
- Appears in Collections:
- KIST Conference Paper > 2023
- Files in This Item:
There are no files associated with this item.