Multimodal Segmentation of Medical Images with Heavily Missing Data
Sona, Diego
2021-01-01
Abstract
An important aim of research in medical imaging is the development of computer-aided diagnosis (CAD) systems. A fundamental step in these systems is image segmentation, and convolutional neural networks (CNNs) have become the most common approach to this task. However, despite their power, in this domain CNNs are limited in their potential performance by the usually small amount of available data [1]. Computed tomography (CT) and magnetic resonance imaging (MRI) scans are often used to examine the internal structure of the human body, and each has its own unique properties and limitations. In common practice, investigations are performed on a single modality; nonetheless, the simultaneous analysis of multiple modalities can significantly boost segmentation accuracy. However, acquiring multiple imaging modalities for the same subject is rare in practice. In this paper we investigate the possibility of generating a multimodal CT-MRI representation for a segmentation task starting from a single modality, either CT or MRI. We treated this as a missing-data problem and designed a pipeline in which a CycleGAN generates the missing modality. The synthetic modality is then paired with the real one to perform the required segmentation, taking advantage of both the multimodal representation and the augmented training dataset. To test the system we used two unrelated labeled datasets, one with CT data and the other with MRI data. Results show that data enrichment with synthetic modalities improves segmentation performance.
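The data flow described in the abstract can be sketched as follows. This is a minimal illustration of the pipeline's structure only, not the authors' implementation: the function names (`cyclegan_ct_to_mri`, `segment`) are hypothetical placeholders for the trained CycleGAN generator and the multimodal segmentation network, and the arrays are dummies standing in for real CT/MRI slices.

```python
import numpy as np

def cyclegan_ct_to_mri(ct_slice):
    # Placeholder for the trained CycleGAN generator that maps a CT slice
    # to a synthetic MRI slice. Here we only return an array of the same
    # shape to illustrate the data flow, not a real translation.
    return np.zeros_like(ct_slice)

def segment(multimodal_slice):
    # Placeholder for the multimodal segmentation CNN. It receives a
    # 2-channel (CT + synthetic MRI) input and would return a per-pixel
    # label map; here we return a dummy map of the right shape.
    return np.zeros(multimodal_slice.shape[1:], dtype=np.int64)

def multimodal_pipeline(ct_slice):
    """Given a real CT slice, synthesize the missing MRI modality and
    segment the stacked CT+MRI representation."""
    synthetic_mri = cyclegan_ct_to_mri(ct_slice)        # fill the missing modality
    pair = np.stack([ct_slice, synthetic_mri], axis=0)  # (2, H, W) multimodal input
    return segment(pair)

mask = multimodal_pipeline(np.random.rand(256, 256).astype(np.float32))
print(mask.shape)  # (256, 256)
```

The same structure applies symmetrically when MRI is the available modality and CT is synthesized; only the generator direction changes.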