CoMed-SAM: Collaborative Medical SAM for Multi-Modality Image Segmentation
- Authors
- Kim, Minkyu; Ryu, Kanghyun; Han, Yoseob
- Issue Date
- 2025-10
- Publisher
- Institute of Electrical and Electronics Engineers Inc.
- Citation
- IEEE Access, v.13, pp.184561 - 184573
- Abstract
- Medical image segmentation is a crucial component of disease diagnosis and treatment planning. The Segment Anything Model (SAM), which has recently gained prominence in natural image processing, exhibits remarkable zero-shot generalization performance. However, the SAM architecture is fundamentally limited to single-modality input, which restricts its ability to leverage multi-modality medical images such as multi-contrast MRI. To address this limitation, we introduce Collaborative Medical SAM (CoMed-SAM), an enhanced segmentation model designed to integrate multiple medical imaging modalities. CoMed-SAM incorporates two novel contributions for robust performance, even with a variable number of inputs: 1) an embedding fusion module that effectively merges features from multiple encoders, and 2) a dropout learning strategy that ensures generalization despite missing modalities. Experimental results on the IVDM3Seg dataset for lumbar intervertebral disc segmentation and the CHAOS dataset for abdominal organ segmentation demonstrate that CoMed-SAM significantly outperforms conventional SAM-based models. Notably, it also achieves superior segmentation accuracy in single-modality scenarios, highlighting its enhanced feature extraction capabilities. Furthermore, ablation studies confirm that the dropout learning strategy is critical: models trained with it consistently outperform those trained without it. The source code and our pretrained model are available at https://github.com/hunzo300/CoMed-SAM.git.
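The abstract's two contributions can be illustrated with a minimal sketch. All function names, shapes, and the mean-fusion rule below are assumptions for illustration only; they are not the authors' implementation, which is available at the repository linked above.

```python
import random

def encode(modality_image):
    # Stand-in for a per-modality image encoder (illustrative only;
    # CoMed-SAM uses SAM-style encoders per the abstract).
    return [float(v) for v in modality_image]

def fuse_embeddings(embeddings):
    # Simple mean fusion over however many modality embeddings are
    # present, so the model accepts a variable number of inputs.
    n = len(embeddings)
    return [sum(vals) / n for vals in zip(*embeddings)]

def training_step(modality_images, drop_prob=0.3):
    # Modality dropout: each input modality may be randomly dropped
    # during training, but at least one modality is always kept, so the
    # model generalizes to missing-modality and single-modality cases.
    kept = [m for m in modality_images if random.random() > drop_prob]
    if not kept:
        kept = [random.choice(modality_images)]
    return fuse_embeddings([encode(m) for m in kept])
```

Because fusion averages over whatever subset of modalities survives dropout, the same fused-embedding interface serves both multi-modality and single-modality inference.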
- Keywords
- Image segmentation; Biomedical imaging; Adaptation models; Transformers; Feature extraction; Training; Magnetic resonance imaging; Accuracy; Robustness; Manuals; Medical image segmentation; multi-modality; deep learning; segment anything model
- URI
- https://pubs.kist.re.kr/handle/201004/153679
- DOI
- 10.1109/ACCESS.2025.3626037
- Appears in Collections:
- KIST Article > 2025