Abstract
Magnetic resonance imaging (MRI) is an important non-invasive clinical tool that can produce high-resolution and reproducible images. However, high-quality MR images require long scanning times, which cause exhaustion and discomfort in patients and induce additional artefacts from both voluntary movements and involuntary physiological motion. To accelerate the scanning process, methods based on k-space undersampling and deep-learning-based reconstruction have been popularised. This work introduced SwinMR, a novel Swin-transformer-based method for fast MRI reconstruction. The whole network consisted of an input module (IM), a feature extraction module (FEM) and an output module (OM). The IM and OM were 2D convolutional layers, and the FEM was composed of a cascade of residual Swin transformer blocks (RSTBs) and 2D convolutional layers. Each RSTB consisted of a series of Swin transformer layers (STLs). Unlike the multi-head self-attention (MSA) of the original transformer, which operates over the whole image space, the (shifted) windows multi-head self-attention (W-MSA/SW-MSA) of the STL was performed within shifted local windows. A novel multi-channel loss was proposed using the sensitivity maps, which was shown to preserve more textures and details. We performed a series of comparative and ablation studies on the Calgary-Campinas public brain MR dataset and conducted a downstream segmentation experiment on the Multi-modal Brain Tumour Segmentation Challenge 2017 dataset. The results demonstrate that SwinMR achieved high-quality reconstruction compared with other benchmark methods, and that it is robust across different undersampling masks, under noise interference and on different datasets. The code is publicly available at https://github.com/ayanglab/SwinMR.
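The accelerated acquisition described above relies on retrospective k-space undersampling: a mask discards most phase-encoding lines, and the resulting aliased, zero-filled image is what a network such as SwinMR learns to de-alias. A minimal NumPy sketch (assuming a random Cartesian line mask with a fully sampled low-frequency centre; the paper evaluates several mask types, and the exact pattern here is illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a fully sampled image (magnitude only, for illustration).
image = rng.standard_normal((128, 128))

# Cartesian k-space of the image, with DC shifted to the centre.
kspace = np.fft.fftshift(np.fft.fft2(image))

# Random 1D undersampling along phase-encoding lines: keep ~30% of lines
# plus a fully sampled 16-line low-frequency centre (assumed pattern).
keep = rng.random(128) < 0.3
keep[128 // 2 - 8 : 128 // 2 + 8] = True
mask = np.zeros((128, 128))
mask[keep, :] = 1.0

# Zero-filled reconstruction: the aliased input to the reconstruction network.
zero_filled = np.fft.ifft2(np.fft.ifftshift(kspace * mask)).real

print(f"fraction of k-space retained: {mask.mean():.2f}")
```

The retained fraction of k-space sets the nominal acceleration factor (e.g. keeping ~35% of lines corresponds to roughly 3x acceleration); the fully sampled centre keeps the dominant low-frequency content, which is also what sensitivity-map estimation in parallel imaging typically relies on.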
Original language | English
---|---
Pages (from-to) | 281-304
Number of pages | 24
Journal | Neurocomputing
Volume | 493
DOIs |
Publication status | Published - 7 Jul 2022
Keywords
- MRI reconstruction
- Transformer
- Compressed sensing
- Parallel imaging
Project and Funding Information
- Project ID
- info:eu-repo/grantAgreement/EC/H2020/101005122/EU/The RapiD and SecuRe AI enhAnced DiaGnosis, Precision Medicine and Patient EmpOwerment Centered Decision Support System for Coronavirus PaNdemics/DRAGON
- info:eu-repo/grantAgreement/EC/H2020/952172/EU/Accelerating the lab to market transition of AI tools for cancer Management/CHAIMELEON
- Funding Info
- This work was supported in part by the UK Research and Innovation Future Leaders Fellowship [MR/V023799/1], in part by the Medical Research Council [MC/PC/21013], in part by the European Research Council Innovative Medicines Initiative [DRAGON, H2020-JTI-IMI2 101005122], in part by the AI for Health Imaging Award [CHAIMELEON, H2020-SC1-FA-DTS-2019-1 952172], in part by the British Heart Foundation [Project Number: TG/18/5/34111, PG/16/78/32402], in part by the NVIDIA Academic Hardware Grant Program, in part by the Project of Shenzhen International Cooperation Foundation [GJHZ20180926165402083], in part by the Basque Government through the ELKARTEK funding program [KK-2020/00049], and in part by the consolidated research group MATHMODE [IT1294-19]