Computed tomography vertebral segmentation from multi-vendor scanner data

Authors
Kim, Chaewoo; Bekar, Oguzcan; Seo, Hyunseok; Park, Sang-Min; Lee, Deukhee
Issue Date
2022-10
Publisher
Society for Computational Design and Engineering (한국CDE학회)
Citation
Journal of Computational Design and Engineering, v.9, no.5, pp.1650 - 1664
Abstract
Automatic medical image segmentation is a crucial procedure in computer-assisted surgery. In particular, accurate segmentation enables three-dimensional reconstruction of the surgical target that captures fine anatomical structures, which in turn supports successful surgical outcomes. However, the performance of automatic segmentation algorithms depends heavily on consistent image properties. To address this issue, we propose a model for standardizing computed tomography (CT) images: a CT image-to-image translation network that translates diverse CT images (non-standard images) into images with consistent features (standard images), enabling more precise U-Net segmentation. Specifically, we combine an image-to-image translation network with a generative adversarial network consisting of a residual-block-based generator and a discriminator, and we use the feature-extraction layers of VGG-16 to capture the style of the standard image and the content of the non-standard image. Because precise diagnosis and surgery require that the anatomical information of the non-standard image be preserved during synthesis, we evaluate performance in three ways: (i) visualizing the geometrical matching between the non-standard (content) and synthesized images to verify that anatomical structures are maintained; (ii) measuring image similarity metrics; and (iii) assessing U-Net segmentation performance on the synthesized images. We show that our network can transfer the texture of standard CT images to diverse non-standard CT images acquired with different scanners and scan protocols, and that the synthesized images retain the global pose and fine structures of the non-standard images. We also compare the segmentation predicted for a non-standard image with that predicted for the synthesized image generated from it by our network. Finally, we compare our model against a windowing process, in which the window parameters of the standard image are applied to the non-standard image, and show that our model outperforms it.
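Illustration (not from the paper): the abstract describes using the feature-extraction layers of VGG-16 to capture the style of a standard CT image and the content of a non-standard CT image. The sketch below shows one common way such perceptual content and Gram-matrix style losses are computed in PyTorch; the layer indices, loss weighting, and how these terms combine with the adversarial loss of the residual-block generator are assumptions for illustration only.

```python
# Minimal sketch of VGG-16 content/style feature losses, assuming a PyTorch setup.
# Not the authors' implementation; layer choices and weights are illustrative.
import torch
import torch.nn as nn
from torchvision import models


class VGGFeatures(nn.Module):
    """Extract intermediate VGG-16 feature maps at chosen layer indices."""

    def __init__(self, layer_ids=(3, 8, 15, 22)):  # relu1_2, relu2_2, relu3_3, relu4_3 (assumed)
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()
        for p in vgg.parameters():
            p.requires_grad_(False)  # VGG is a fixed feature extractor
        self.vgg = vgg
        self.layer_ids = set(layer_ids)

    def forward(self, x):
        feats = []
        for i, layer in enumerate(self.vgg):
            x = layer(x)
            if i in self.layer_ids:
                feats.append(x)
        return feats


def gram_matrix(f):
    """Channel-wise Gram matrix used as a texture (style) statistic."""
    b, c, h, w = f.shape
    f = f.view(b, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)


def perceptual_losses(extractor, synthesized, content_img, style_img):
    """Content loss vs. the non-standard image, style loss vs. the standard image."""
    f_syn = extractor(synthesized)
    f_con = extractor(content_img)
    f_sty = extractor(style_img)
    content_loss = sum(nn.functional.mse_loss(s, c) for s, c in zip(f_syn, f_con))
    style_loss = sum(nn.functional.mse_loss(gram_matrix(s), gram_matrix(t))
                     for s, t in zip(f_syn, f_sty))
    return content_loss, style_loss


if __name__ == "__main__":
    # Usage sketch: these terms would be added to the adversarial objective of the
    # residual-block generator/discriminator pair described in the abstract.
    extractor = VGGFeatures()
    syn = torch.rand(1, 3, 256, 256)  # synthesized slice, replicated to 3 channels
    con = torch.rand(1, 3, 256, 256)  # non-standard (content) slice
    sty = torch.rand(1, 3, 256, 256)  # standard (style) slice
    lc, ls = perceptual_losses(extractor, syn, con, sty)
    print(float(lc), float(ls))
```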
Keywords
computed tomography image; style transfer; super-resolution; generative adversarial networks; vertebral segmentation; image-to-image translation; U-Net
ISSN
2288-4300
URI
https://pubs.kist.re.kr/handle/201004/114501
DOI
10.1093/jcde/qwac072
Appears in Collections:
KIST Article > 2022