<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://purl.org/rss/1.0/" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel rdf:about="https://pubs.kist.re.kr/handle/201004/153345">
    <title>DSpace Collection:</title>
    <link>https://pubs.kist.re.kr/handle/201004/153345</link>
    <description />
    <items>
      <rdf:Seq>
        <rdf:li rdf:resource="https://pubs.kist.re.kr/handle/201004/154397" />
        <rdf:li rdf:resource="https://pubs.kist.re.kr/handle/201004/154396" />
        <rdf:li rdf:resource="https://pubs.kist.re.kr/handle/201004/154395" />
        <rdf:li rdf:resource="https://pubs.kist.re.kr/handle/201004/154393" />
      </rdf:Seq>
    </items>
    <dc:date>2026-04-15T17:28:01Z</dc:date>
  </channel>
  <item rdf:about="https://pubs.kist.re.kr/handle/201004/154397">
    <title>Physics-based phenomenological characterization of cross-modal bias in multimodal models</title>
    <link>https://pubs.kist.re.kr/handle/201004/154397</link>
    <description>Title: Physics-based phenomenological characterization of cross-modal bias in multimodal models
Authors: Kim, Hyeongmo; Kang, Sohyun; Choi, Yerin; Ji, Seung Yeon; Woo, Junhyuk; Chung, Hyunsuk; Han, Soyeon Caren; Han, Kyungreem
Abstract: The term 'algorithmic fairness' is used to evaluate whether AI models operate fairly in both comparative (where fairness is understood as formal equality, such as “treat like cases as like”) and non-comparative (where unfairness arises from the model’s inaccuracy, arbitrariness, or inscrutability) contexts. Recent advances in multimodal large language models (MLLMs) are breaking new ground in multimodal understanding, reasoning, and generation; however, we argue that inconspicuous distortions arising from complex multimodal interaction dynamics can lead to systematic bias. The purpose of this position paper is twofold: first, it is intended to acquaint AI researchers with phenomenological explainable approaches that rely on the physical entities that the machine experiences during training/inference, as opposed to the traditional cognitivist symbolic account or metaphysical approaches; second, it is to state that this phenomenological doctrine will be practically useful for tackling algorithmic fairness issues in MLLMs. We develop a surrogate physics-based model that describes transformer dynamics (i.e., semantic network structure and self-/cross-attention) to analyze the dynamics of cross-modal bias in MLLMs, which are not fully captured by conventional embedding- or representation-level analyses. We support this position through multi-input diagnostic experiments: 1) perturbation-based analyses of emotion classification using Qwen2.5-Omni and Gemma 3n, and 2) dynamical analysis of Lorenz chaotic time-series prediction through the physical surrogate. Across two architecturally distinct MLLMs, we show that multimodal inputs can reinforce modality dominance rather than mitigate it, as revealed by structured error-attractor patterns under systematic label perturbation, complemented by dynamical analysis.</description>
    <dc:date>2026-01-25T00:00:00Z</dc:date>
  </item>
  <item rdf:about="https://pubs.kist.re.kr/handle/201004/154396">
    <title>Development of a Machine-Learning-Driven Microneedle Design Methodology for Biological Tissue Grippers</title>
    <link>https://pubs.kist.re.kr/handle/201004/154396</link>
    <description>Title: Development of a Machine-Learning-Driven Microneedle Design Methodology for Biological Tissue Grippers
Authors: 류제경; 김지엽; 박찬욱; 김해윤; 이득희; 한경원
Abstract: Microneedles offer promising capabilities not only for minimally invasive drug delivery but also as effective bio-tissue grippers. However, achieving strong tissue fixation while minimizing tissue damage during insertion remains a significant challenge. In this study, we propose a novel microneedle geometry optimized for 3D printing, designed to maximize the Pull-Out-to-Penetration Ratio through a machine-learning-based optimization framework combined with finite element analysis. Experimental results show that the optimized geometry achieves a six-fold improvement in the objective metric relative to conventional conical designs, demonstrating enhanced tissue fixation while simultaneously reducing insertion-induced damage. This approach highlights the potential for customizable, low-pain microneedle designs across a broad range of biomedical applications.</description>
    <dc:date>2026-02-05T00:00:00Z</dc:date>
  </item>
  <item rdf:about="https://pubs.kist.re.kr/handle/201004/154395">
    <title>Ultrasound Probe Calibration Using a Phantom Embedded with Randomly Positioned Spherical Objects</title>
    <link>https://pubs.kist.re.kr/handle/201004/154395</link>
    <description>Title: Ultrasound Probe Calibration Using a Phantom Embedded with Randomly Positioned Spherical Objects
Authors: 류제경; 노바 에카 디아나; 강규원; 박찬욱; 한경원; 이득희
Abstract: We present a rapid and accurate freehand ultrasound probe spatial calibration technique that uses a simple phantom easily fabricated in laboratory settings. The phantom incorporates randomly distributed spherical inclusions made from gelatin mixtures with different ultrasonic velocities and is CT-visible, allowing precise geometric reconstruction without requiring high-precision manufacturing. An optical tracking system mapped the phantom coordinates to the probe coordinates, and a synchronization algorithm corrected approximately 150 ms of system latency. Elliptical projections in ultrasound images were analyzed by extracting ellipse centers and axes to estimate calibration parameters using our new alignment algorithm. Calibration performed at 6, 9, and 12 cm depth demonstrated consistent accuracy, with three operators freely sweeping the probe across the phantom and achieving a mean error of 0.6656 mm. This method addresses common limitations in existing approaches—such as complex phantom fabrication, limited generalizability, and high sensitivity to deviations—and enables reliable real-time calibration across multiple depths with minimal error, supporting its potential for broad clinical and research adoption.</description>
    <dc:date>2026-02-10T00:00:00Z</dc:date>
  </item>
  <item rdf:about="https://pubs.kist.re.kr/handle/201004/154393">
    <title>CNN-based Input-Aware Gradient Channel Selection for Motor Imagery Classification</title>
    <link>https://pubs.kist.re.kr/handle/201004/154393</link>
    <description>Title: CNN-based Input-Aware Gradient Channel Selection for Motor Imagery Classification
Authors: Jeong, Ji Hyeok; Kim, Dong-Joo; Kim, Hyungmin
Abstract: A CNN-based interpretable channel selection framework is proposed to reduce data complexity in session-transfer motor imagery (MI) brain–computer interface (BCI) scenarios. Experiments conducted on the BCI Competition IV-2a dataset with nine healthy subjects demonstrate that the proposed Input-Aware Gradient approach effectively preserves classification performance even when the number of channels is reduced to 10 (p = 0.11). These findings indicate that the proposed framework can identify a compact, subject-specific channel subset while retaining session-invariant neural features, thereby enabling the development of efficient and practical BCI systems.</description>
    <dc:date>2026-02-05T00:00:00Z</dc:date>
  </item>
</rdf:RDF>

