TY - GEN
T1 - Multi-Contrast MRI Image Translation via Pathology-Aware Generative Adversarial Networks
AU - Abdallah, Mohamed M.
AU - Rasmy, Mohamed Emad M.
AU - Rushdi, Muhammad A.
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
AB - Multi-contrast magnetic resonance imaging (MRI) sequences reflect different tissue characteristics, and thus such sequences are crucial for clinical and non-clinical applications. However, the acquisition of these MRI sequences is both financially and computationally expensive. In order to alleviate these expenses, cross-modality MRI image translation schemes have been proposed to transform an MRI image of one contrast (or modality) into another. Image translation has been primarily based on deep learning architectures, especially generative adversarial networks (GANs). Most of the GAN-based methods focus on matching voxel values and anatomical structures, irrespective of whether a region is normal or abnormal. In this paper, we present a pathology-aware GAN (Pa-GAN) architecture that exploits MRI contrast images and segmentation masks of pathological areas in order to explicitly differentiate between normal and pathological tissues in the multi-contrast MRI image translation process. This framework leads to the synthesis of MRI images with better anatomical details in both normal and abnormal regions. We employed edge-based, gradient-based, and mean-absolute-error (MAE) loss functions in order to ensure the structural integrity of the synthesized MRI images. Extensive experiments were conducted on the BraTS 2015 dataset for T1-weighted (T1W)-to-T2-weighted (T2W) MRI image translation. The proposed architecture achieved promising qualitative and quantitative results, with average structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR) values of 0.972 and 34.83 dB, respectively.
KW - cGAN
KW - Image-to-Image
KW - MRI Translation
UR - http://www.scopus.com/inward/record.url?scp=85177212354&partnerID=8YFLogxK
U2 - 10.1109/MLSP55844.2023.10285896
DO - 10.1109/MLSP55844.2023.10285896
M3 - Conference contribution
AN - SCOPUS:85177212354
T3 - IEEE International Workshop on Machine Learning for Signal Processing, MLSP
BT - Proceedings of the 2023 IEEE 33rd International Workshop on Machine Learning for Signal Processing, MLSP 2023
A2 - Comminiello, Danilo
A2 - Scarpiniti, Michele
PB - IEEE Computer Society
T2 - 33rd IEEE International Workshop on Machine Learning for Signal Processing, MLSP 2023
Y2 - 17 September 2023 through 20 September 2023
ER -