Internship proposal - 2023
Multimodal image segmentation for computer-aided radiotherapy
Level: Master / Engineering student
Duration: 6 months
- Keywords: Deep learning, medical image analysis, multimodal imaging, radiotherapy
- Supervisors: firstname.lastname@example.org
- Team and Lab: SIMS team, LS2N Lab, Nantes, France
- Starting date: Jan. 2023 (flexible)
In the context of image-guided radiotherapy, this project focuses on the automatic segmentation of multiple organs from Magnetic Resonance (MR; T1, T2) and Computed Tomography (CT) images. We will target two anatomies, the prostate and the brain, together with their respective surrounding Organs At Risk (OARs). Both modalities are needed because MR images allow better visualisation of soft tissues, while CT is required for radiotherapy dose computation. Segmentation from multimodal images (here MR and CT) is therefore necessary for clinical assessment, diagnosis and treatment planning [Ackaouy20, Ouyang19]. Extensive literature has shown the effectiveness of convolutional neural networks in segmenting multiple organs [Li21, Painchaud20]. Yet, without proper adaptation, these models fail when deployed across modalities, new populations or different clinical sites, mainly due to domain shift. Designing models that perform well across domains is thus critical, especially since labels are scarce and expensive to obtain.
Two internships are proposed, each working on one of the two target anatomies.
The first internship will be dedicated to the prostate dataset. The focus will be to learn the organ's shape distribution from MR images (single sequence) and then use the learned shapes to help segment CT images, where contours are less visible. We will start by studying our prior work on Optimal Latent Vector Alignment (OLVA) [AlChanti21] for segmentation under unsupervised or weakly supervised domain adaptation (DA). We will then explore how to adapt this work, or similar techniques, to the prostate dataset.
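To give a flavour of the kind of alignment OLVA builds on, the sketch below shows a generic entropy-regularised optimal-transport (Sinkhorn) alignment cost between two batches of latent vectors, in plain numpy. This is an illustrative simplification, not the OLVA implementation: OLVA operates on the latent space of a segmentation network, and all function names here are our own.

```python
import numpy as np

def sinkhorn_plan(cost, eps=0.1, n_iter=200):
    """Entropy-regularised optimal transport plan between two uniform batches."""
    n, m = cost.shape
    cost = cost / (cost.max() + 1e-12)        # normalise for numerical stability
    K = np.exp(-cost / eps)                   # Gibbs kernel
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iter):                   # Sinkhorn fixed-point iterations
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]        # coupling matrix, sums to 1

def latent_alignment_loss(z_src, z_tgt, eps=0.1):
    """Approximate Wasserstein cost between source and target latent batches."""
    # pairwise squared Euclidean distances between latent vectors
    cost = ((z_src[:, None, :] - z_tgt[None, :, :]) ** 2).sum(-1)
    plan = sinkhorn_plan(cost, eps)
    return float((plan * cost).sum())
```

In a real DA pipeline this loss would be computed on the encoder features of source (MR) and target (CT) mini-batches and minimised jointly with the supervised segmentation loss, typically with a differentiable OT library (e.g. POT or geomloss) rather than raw numpy.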
The second internship will address the brain dataset and focus on two aspects: the multi-sequence nature of brain images (multiple MR images for a single brain) and partial annotations (not all organs are annotated for every patient). The intern will study existing methods for handling inhomogeneous or incomplete annotations and multi-target domain adaptation [Saporta21], and integrate them into OLVA or other UDA methods for segmentation.
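One standard way to train with partial annotations is to mask the segmentation loss so that only the organ channels annotated for a given patient contribute to the gradient. The minimal numpy sketch below illustrates the idea; the function name and normalisation choice are ours, not taken from the cited works.

```python
import numpy as np

def masked_cross_entropy(probs, labels, annotated):
    """Pixel-wise cross-entropy restricted to organs annotated for this patient.

    probs:     (C, H, W) softmax outputs of the segmentation network
    labels:    (C, H, W) one-hot ground truth (zeros where an organ is unlabeled)
    annotated: (C,) boolean mask, True if organ c was annotated for this patient
    """
    eps = 1e-8
    ce = -(labels * np.log(probs + eps))      # per-channel cross-entropy map
    ce = ce[annotated]                        # drop unannotated organ channels
    return float(ce.sum() / max(annotated.sum(), 1))
```

With such a mask, patients annotated for different organ subsets can be mixed in the same training set, since unlabeled channels never produce a (spurious) penalty.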
The two internships will be part of the CEMMTAUR CominLabs project.
[AlChanti21] D. Al Chanti and D. Mateus. Optimal Latent Vector Alignment for Unsupervised Domain Adaptation in Medical Image Segmentation. Int. Conf. on Medical Image Computing and Computer-Assisted Interventions, MICCAI 2021.
[Ackaouy20] A. Ackaouy et al. Unsupervised domain adaptation with optimal transport in multi-site segmentation of multiple sclerosis lesions from MRI data. Frontiers in Computational Neuroscience 14:19, 2020.
[Gonzalez20] V. Gonzalez Duque, D. Al Chanti, M. Crouzier, A. Nordez, L. Lacourpaille, and D. Mateus. Spatio-temporal consistency and negative label transfer for 3D freehand US segmentation. Int. Conf. on Medical Image Computing and Computer-Assisted Interventions, MICCAI 2020.
[Islam21] M. Islam and B. Glocker. Spatially varying label smoothing: Capturing uncertainty from expert annotations. International Conference on Information Processing in Medical Imaging, IPMI 2021, pp. 677-688. Springer, Cham.
[Ouyang19] C. Ouyang et al. Data efficient unsupervised domain adaptation for cross-modality image segmentation. MICCAI 2019.
[Saporta21] A. Saporta et al. Multi-Target Adversarial Frameworks for Domain Adaptation in Semantic Segmentation. ICCV 2021.
[Zhang21] Y. Zhang et al. Deep multimodal fusion for semantic image segmentation: A survey. Image and Vision Computing, Elsevier, 2021. doi:10.1016/j.imavis.2020.104042, hal-02963619.