Full metadata record

DC Field    Value    Language
dc.contributor.author    Lee, Kyuheon    -
dc.contributor.author    Park, Tae Young    -
dc.contributor.author    Min, Byeong-Kyong    -
dc.contributor.author    Kim, Hyungmin    -
dc.date.accessioned    2024-12-03T06:00:22Z    -
dc.date.available    2024-12-03T06:00:22Z    -
dc.date.created    2024-11-25    -
dc.date.issued    2024-11-22    -
dc.identifier.uri    https://pubs.kist.re.kr/handle/201004/151270    -
dc.description.abstract    Purpose: Focused ultrasound neuromodulation is attracting significant attention because it can stimulate the brain non-invasively and with high spatial resolution. Furthermore, with a multi-element transducer, the acoustic focus can be steered without physically moving the device, improving the clinical workflow. Despite these advantages, current methods for monitoring the acoustic focus have several limitations. First, when the focus is predicted using only geometric information from an optical tracker, acoustic aberrations caused by the skull cannot be accounted for. Additionally, while numerical solvers can predict the acoustic focus, they are computationally intensive, making real-time clinical application challenging. To overcome these issues, AI-based simulations for single-element transducers have been developed, leading to advances in simulation-guided navigation. In this study, we propose an AI simulation for multi-element transducers. We expect this approach to enable real-time monitoring of the acoustically steered focus in response to phase changes in each element, ultimately enhancing clinical outcomes in transcranial focused ultrasound stimulation applications.
Materials & Methods: We used a four-element, annular transducer with a center frequency of 500 kHz. Ground-truth data for AI model training were generated using the k-Wave MATLAB toolbox. Hounsfield units (HU) from CT images were used to account for the acoustic properties of the skull. The transducer position remained fixed throughout the process. The phase combinations for the training data were randomly assigned and did not overlap with the test data. The AI simulation model was designed as a U-shaped encoder-decoder network integrating both CNN and Swin Transformer architectures. It takes the phase of each element, together with the transducer's geometric information and the HU map, as input and generates the simulation results.
Results: The accuracy of our AI model was assessed by comparing the peak pressure ratio and the Dice similarity coefficient (DSC) of the full-width-at-half-maximum (FWHM) region between the AI predictions and the k-Wave simulation results. In free water, the average peak pressure error was 5.6% and the average DSC was 88.94%. In the human skull, the average peak pressure error was 2% and the average DSC was 93.3%. The k-Wave simulation took 8 seconds to complete, whereas the AI model performed inference in only 0.3 seconds.
Conclusion: We propose an AI model for real-time prediction of the acoustic pressure distribution from phase combinations. Numerical evaluations demonstrate that the model achieves results comparable to k-Wave simulations while significantly reducing processing time, thereby enabling real-time monitoring of the steered focal position.    -
dc.language    English    -
dc.publisher    대한치료초음파학회 (Korean Society for Therapeutic Ultrasound)    -
dc.title    Real-time Simulation of Phased Multi-element Transducer using AI    -
dc.type    Conference    -
dc.description.journalClass    2    -
dc.identifier.bibliographicCitation    대한치료초음파학회 제10차 정기학술대회 (10th Annual Meeting of the Korean Society for Therapeutic Ultrasound)    -
dc.citation.title    대한치료초음파학회 제10차 정기학술대회    -
dc.citation.conferencePlace    KO    -
dc.citation.conferencePlace    서울 더케이호텔 별관 (The-K Hotel Seoul, Annex)    -
dc.citation.conferenceDate    2024-11-22    -
dc.relation.isPartOf    대한치료초음파학회 제10차 정기학술대회 초록집 (abstract book)    -
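The abstract evaluates the AI model against k-Wave using two metrics: the relative peak pressure error and the Dice similarity coefficient (DSC) of the FWHM region. A minimal sketch of how these metrics are typically computed is shown below; the function names and the synthetic Gaussian test fields are illustrative assumptions, not the authors' code or data.

```python
import numpy as np

def fwhm_mask(p):
    # FWHM region: points at or above half of the peak pressure magnitude
    p = np.abs(p)
    return p >= 0.5 * p.max()

def dice(a, b):
    # Dice similarity coefficient between two boolean masks
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def peak_pressure_error(p_ref, p_pred):
    # Relative error of the predicted peak pressure vs. the reference peak
    return abs(p_pred.max() - p_ref.max()) / p_ref.max()

# Illustrative synthetic fields standing in for a k-Wave result and an
# AI prediction (slightly shifted and scaled Gaussian focal spots)
y, x = np.mgrid[-32:32, -32:32]
p_kwave = np.exp(-(x**2 + y**2) / (2 * 8.0**2))
p_ai = 0.98 * np.exp(-((x - 1)**2 + y**2) / (2 * 8.0**2))

dsc = dice(fwhm_mask(p_kwave), fwhm_mask(p_ai))
err = peak_pressure_error(p_kwave, p_ai)
print(f"DSC(FWHM) = {dsc:.3f}, peak pressure error = {err:.1%}")
```

Because each field's FWHM mask is defined relative to its own peak, the DSC isolates agreement in focal position and shape, while the peak pressure error captures amplitude accuracy separately.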
Appears in Collections:
KIST Conference Paper > 2024