Full metadata record

DC Field    Value    Language
dc.contributor.author    Kim, Minkyu    -
dc.contributor.author    Ryu, Kanghyun    -
dc.contributor.author    Han, Yoseob    -
dc.date.accessioned    2025-11-26T10:03:30Z    -
dc.date.available    2025-11-26T10:03:30Z    -
dc.date.created    2025-11-26    -
dc.date.issued    2025-10    -
dc.identifier.uri    https://pubs.kist.re.kr/handle/201004/153679    -
dc.description.abstract    Medical image segmentation is a crucial component of disease diagnosis and treatment planning. The Segment Anything Model (SAM), which has recently gained prominence in natural image processing, exhibits remarkable zero-shot generalization performance. However, the SAM architecture is fundamentally limited to a single-modality input, which restricts its ability to leverage multi-modality medical images such as multi-contrast MRI. To address this limitation, in this study we introduce Collaborative Medical SAM (CoMed-SAM), an enhanced segmentation model designed to integrate multiple medical imaging modalities. CoMed-SAM incorporates two novel contributions for robust performance, even with a variable number of inputs: 1) an embedding fusion module that effectively merges features from multiple encoders, and 2) a dropout learning strategy that ensures generalization despite missing modalities. Experimental results on the IVDM3Seg dataset for lumbar intervertebral disc segmentation and the CHAOS dataset for abdominal organ segmentation demonstrate that CoMed-SAM significantly outperforms conventional SAM-based models. Notably, it also achieves superior segmentation accuracy in single-modality scenarios, highlighting its enhanced feature extraction capabilities. Furthermore, ablation studies confirm that the dropout learning strategy is critical, as models trained with this strategy consistently outperform those trained without it. The source code and our pretrained model are available at https://github.com/hunzo300/CoMed-SAM.git    -
dc.language    English    -
dc.publisher    Institute of Electrical and Electronics Engineers Inc.    -
dc.title    CoMed-SAM: Collaborative Medical SAM for Multi-Modality Image Segmentation    -
dc.type    Article    -
dc.identifier.doi    10.1109/ACCESS.2025.3626037    -
dc.description.journalClass    1    -
dc.identifier.bibliographicCitation    IEEE Access, v.13, pp.184561 - 184573    -
dc.citation.title    IEEE Access    -
dc.citation.volume    13    -
dc.citation.startPage    184561    -
dc.citation.endPage    184573    -
dc.description.isOpenAccess    Y    -
dc.description.journalRegisteredClass    scie    -
dc.description.journalRegisteredClass    scopus    -
dc.identifier.wosid    001606658900003    -
dc.identifier.scopusid    2-s2.0-105020284795    -
dc.relation.journalWebOfScienceCategory    Computer Science, Information Systems    -
dc.relation.journalWebOfScienceCategory    Engineering, Electrical & Electronic    -
dc.relation.journalWebOfScienceCategory    Telecommunications    -
dc.relation.journalResearchArea    Computer Science    -
dc.relation.journalResearchArea    Engineering    -
dc.relation.journalResearchArea    Telecommunications    -
dc.type.docType    Article    -
dc.subject.keywordAuthor    Image segmentation    -
dc.subject.keywordAuthor    Biomedical imaging    -
dc.subject.keywordAuthor    Adaptation models    -
dc.subject.keywordAuthor    Transformers    -
dc.subject.keywordAuthor    Feature extraction    -
dc.subject.keywordAuthor    Training    -
dc.subject.keywordAuthor    Magnetic resonance imaging    -
dc.subject.keywordAuthor    Accuracy    -
dc.subject.keywordAuthor    Robustness    -
dc.subject.keywordAuthor    Manuals    -
dc.subject.keywordAuthor    Medical image segmentation    -
dc.subject.keywordAuthor    multi-modality    -
dc.subject.keywordAuthor    deep learning    -
dc.subject.keywordAuthor    segment anything model    -
Appears in Collections:
KIST Article > 2025
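
The abstract describes a dropout learning strategy that randomly withholds modalities during training so that the fused model remains robust when some inputs are missing at inference time. The authors' actual implementation is in the linked repository; the sketch below is only a minimal, framework-free illustration of the general idea, and every name in it (`fuse_embeddings`, `drop_prob`) is invented here, not taken from the paper.

```python
import random

def fuse_embeddings(embeddings, train=True, drop_prob=0.5, rng=random):
    """Fuse per-modality feature vectors by element-wise averaging.

    embeddings: list of equal-length feature vectors, one per modality
    (e.g. one per MRI contrast). During training, each modality is
    independently dropped with probability drop_prob, but at least one
    modality is always kept, so the fusion also learns to work from
    single-modality or partial inputs.
    """
    kept = list(embeddings)
    if train:
        kept = [e for e in embeddings if rng.random() >= drop_prob]
        if not kept:
            # Guarantee at least one surviving modality.
            kept = [rng.choice(embeddings)]
    n = len(kept)
    # Element-wise mean over whichever modalities survived.
    return [sum(vals) / n for vals in zip(*kept)]
```

Because the fused output is a mean over however many modalities survive, the same function handles a variable number of inputs, which mirrors the "variable number of inputs" robustness claimed in the abstract.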
