Online Gadgetron Reconstruction of 3D Magnetic Resonance Fingerprinting via a GPU-Accelerated Azure Kubernetes Cluster
Andrew Dupuis1, Yong Chen1, Rasim Boyacioglu1, John Stairs2, Michael Hansen2, Kelvin Chow3, and Mark A Griswold1
1Case Western Reserve University, Cleveland, OH, United States, 2Microsoft, Redmond, WA, United States, 3Siemens Medical Solutions USA, Inc., Chicago, IL, United States

Synopsis

Reconstruction of 3D Magnetic Resonance Fingerprinting acquisitions is computationally demanding, resulting in long processing times. GPU parallelization of the reconstruction’s NUFFT, pattern matching, and coil combination steps improves performance, but traditionally requires high-performance computers at the scanner. We propose an online reconstruction on a remote GPU-accelerated Kubernetes cluster, allowing many scanners or sites to share easily upgradeable and manageable computing resources. Additional calibration measurements, such as B1 maps, can also be transferred to enable inline B1 inhomogeneity correction. We also demonstrate that 3D-MRF reconstruction is robust to raw data compression, which can be used to reduce site-to-cloud bandwidth requirements.

Purpose

Without adequate parallelization, 3D-MRF image reconstruction is currently too computationally intensive to be useful as a routine clinical tool. GPU implementations of the 3D-MRF reconstruction’s NUFFT, pattern matching, and coil combination steps substantially reduce this performance deficit [1], but require high-performance computers with substantial amounts of GPU memory. Such dedicated computational resources are substantial investments with limited lifespans, and quickly become cost-prohibitive to install and maintain across a healthcare system.
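
To illustrate why the pattern-matching step parallelizes well on a GPU, the sketch below performs MRF template matching as one large matrix multiply per chunk of voxels. This is a minimal illustration assuming CuPy and hypothetical array names, not the implementation benchmarked in this work:

```python
# Minimal sketch of GPU-based MRF dictionary matching (illustrative only).
# Assumes: signals (n_voxels x n_timepoints, complex), dictionary
# (n_atoms x n_timepoints, complex), and t1t2 (n_atoms x 2) holding the
# T1/T2 pair that generated each dictionary atom.
import cupy as cp  # NumPy-compatible arrays that execute on the GPU

def match_fingerprints(signals, dictionary, t1t2, chunk=4096):
    """Return per-voxel (T1, T2) by maximum normalized inner product."""
    d = dictionary / cp.linalg.norm(dictionary, axis=1, keepdims=True)
    out = cp.empty((signals.shape[0], 2), dtype=t1t2.dtype)
    for start in range(0, signals.shape[0], chunk):   # chunk voxels to fit GPU memory
        s = signals[start:start + chunk]
        s = s / cp.linalg.norm(s, axis=1, keepdims=True)
        corr = cp.abs(s @ d.conj().T)                 # one large matrix multiply
        out[start:start + chunk] = t1t2[cp.argmax(corr, axis=1)]
    return out
```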

To resolve these speed and logistical concerns, we have implemented a GPU-accelerated Gadgetron reconstruction [2,3] that is capable of running remotely within a Kubernetes cluster hosted on Azure [4,5]. Kubernetes enables container-based deployment of Gadgetron as a managed appliance [7], while the cluster architecture allows rapid resource scaling and load balancing, with scanners at multiple sites sharing a single reconstruction “service” endpoint. This approach enables us to scale our 3D-MRF research to multiple sites without additional hardware deployments. Our implementation also supports B1 mapping for online B1 correction, and demonstrates that 3D-MRF raw data can be compressed for use on low-bandwidth networks without introducing additional latency or reducing image quality.
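
As a flavor of the cluster-side management this architecture enables, the sketch below resizes the shared pool of Gadgetron reconstruction pods using the official Kubernetes Python client; the deployment and namespace names are illustrative assumptions, not the configuration of the cited gadgetron-azure deployment [7]:

```python
# Hypothetical sketch: scaling a containerized Gadgetron deployment so that
# additional scanners can share the same reconstruction "service" endpoint.
# The deployment/namespace names below are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()          # read cluster credentials (e.g., from ~/.kube/config)
apps = client.AppsV1Api()

def scale_recon_workers(replicas, name="gadgetron", namespace="recon"):
    """Resize the pool of GPU-enabled reconstruction pods."""
    apps.patch_namespaced_deployment_scale(
        name, namespace, body={"spec": {"replicas": replicas}})

scale_recon_workers(4)             # e.g., grow the pool before a multi-site study
```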

Methods

An ISMRM/NIST system phantom and a healthy volunteer (45-year-old male) were imaged on a 3T scanner (MAGNETOM Vida, Siemens Healthcare, Erlangen, Germany) using a prototype FISP 3D-MRF sequence with a 250×250×120 mm³ field of view, 1×1×2 mm³ spatial resolution, and a factor of 2 interleaved partition undersampling. The total acquisition time was ~5.5 min. B1 maps were acquired first and reconstructed with the standard Siemens processing, then automatically transferred to the Kubernetes cluster for use in the B1-corrected reconstruction [6].
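
One common way to exploit a measured B1+ map during MRF matching is to simulate the dictionary over a grid of B1 values and restrict each voxel's match to atoms at the B1 value nearest the measurement. The sketch below illustrates that idea with hypothetical names and is not necessarily the correction scheme used here:

```python
# Illustrative B1-constrained dictionary match for a single voxel.
# Assumes: dictionary (n_atoms x n_timepoints, complex) simulated over a grid
# of B1 values recorded per-atom in atom_b1, and t1t2 (n_atoms x 2).
import numpy as np

def b1_constrained_match(signal, measured_b1, dictionary, atom_b1, t1t2):
    b1_grid = np.unique(atom_b1)
    nearest = b1_grid[np.argmin(np.abs(b1_grid - measured_b1))]
    mask = atom_b1 == nearest                 # keep only atoms at the measured B1
    d = dictionary[mask]
    corr = np.abs(d.conj() @ signal) / np.linalg.norm(d, axis=1)
    return t1t2[mask][np.argmax(corr)]
```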

Acquired data were sent from the scanner to the remote Gadgetron cluster using the Framework for Image Reconstruction Environment (FIRE) prototype [9], via an SSH tunnel to a load-balancing jump node managing cluster ingress. SNR-constrained data compression was tested to evaluate whether the necessary network bandwidth could be reduced without substantial image degradation [8]. Data were then reconstructed by a Gadgetron appliance, and the reconstructed maps were returned via the same SSH tunnel. The network infrastructure and gadgets used within our reconstruction pipeline are shown in Figure 1.
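
To convey the idea behind SNR-constrained compression [8], the sketch below bounds the quantization error of the raw samples to a fixed fraction of the measured noise level before entropy coding; it is a simplified stand-in under stated assumptions, not the cited framework's actual scheme:

```python
# Simplified sketch of SNR-constrained raw-data compression (illustrative).
# Quantization error is held to a small fraction of the measured noise std,
# so the SNR penalty is bounded before data leave the scanner network.
import numpy as np
import zlib

def compress_kspace(raw, noise_std, snr_error_tol=0.01):
    """Uniformly quantize complex64 raw data, then entropy-code the integers."""
    # A uniform quantizer with step q adds noise with std q/sqrt(12);
    # choose q so that this stays below snr_error_tol * noise_std.
    step = snr_error_tol * noise_std * np.sqrt(12)
    ints = np.round(raw.view(np.float32) / step).astype(np.int32)
    return zlib.compress(ints.tobytes()), step

def decompress_kspace(blob, step, shape):
    floats = np.frombuffer(zlib.decompress(blob), dtype=np.int32).astype(np.float32)
    return (floats * step).view(np.complex64).reshape(shape)
```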

Primary considerations for our remote reconstruction implementation were reconstruction quality, B1 mapping support, processing speed, and network bandwidth. Bandwidth was measured via Azure’s Kubernetes metrics interface. Processing speed was measured by timing the delay between acquisition completion and image availability on the scanner’s host interface, which also reflects the delay between acquisition and the availability of the reconstructed DICOM images.

Results

In Figure 2, previously validated offline reconstructions in MATLAB (R2021a, The MathWorks, Natick, MA, USA) [1] are compared to the proposed Gadgetron implementation, both visually and via the reconstructed T1 and T2 values; the results suggest a good match between the two approaches.

Figure 3 demonstrates the impact of the implemented online B1 correction method for in-vivo acquisitions: including B1 mapping data in the online reconstruction yields a visible correction in the T2 maps, with no change in reconstruction time.

Figure 4 demonstrates the differences in T1 and T2 maps caused by SNR-constrained data compression. With compression disabled, phantom reconstructions finished in 52 seconds with a mean network throughput of 138 Mbps, versus 54 seconds and 70 Mbps at a 1% SNR compression error tolerance. Relative to the maps generated from uncompressed data, compression introduced 0.00±0.09% error in T1 values and 0.00±0.04% error in T2 values.
In-vivo, online reconstructions without compression finished in 55 seconds with a mean bandwidth of 212 Mbps, compared to 56 seconds and 75 Mbps with a 1% SNR compression error tolerance. The higher in-vivo bandwidth is attributable to the use of additional coils. Relative to the maps generated from uncompressed data, compression introduced 0.01±0.2% error in T1 values and 0.02±1.17% error in T2 values.

Figure 5 demonstrates the pipeline’s ability to reconstruct 32-channel head coil acquisitions with only a moderate increase in delay. Compared to the 20-channel head coil, with a mean in-vivo reconstruction delay of 55 seconds and mean network throughput of 212 Mbps, the 32-channel acquisition’s reconstruction delay was 93 seconds, with a mean throughput of 368 Mbps without compression and 98 Mbps at a 1% compression error tolerance.

Discussion

Reconstruction of 3D-MRF can be performed remotely on a GPU-accelerated Kubernetes cluster with 1-1.5 minutes between acquisition completion and the availability of quantitative maps at the scanner. Reconstruction is accelerated by GPU implementations of the NUFFT, pattern matching, and coil combination steps, and image quality is improved through online B1 correction. The proposed pipeline simplifies deployment to new scanners by eliminating the need for on-site computing resources, while unified management of cloud computational resources streamlines reconstruction software updates. Finally, 3D-MRF appears to be highly insensitive to SNR-constrained data compression, implying that sites with relatively slow (sub-100 Mbps) internet connections should be able to make use of cloud-based reconstructions without compromising image quality. This work therefore demonstrates that cloud reconstruction of 3D-MRF datasets is not only feasible, but may also improve the processing time and deployability of the technique in research and clinical workflows.

Acknowledgements

This work was supported by Siemens Healthineers and Microsoft Corporation.

References

  1. Chen Y, et al. Rapid 3D MR Fingerprinting Reconstruction Using a GPU-Based Framework. ISMRM 2020.

  2. Hansen MS, Sørensen TS. Gadgetron: an open source framework for medical image reconstruction. Magn Reson Med. 2013 Jun;69(6):1768-76.

  3. Xue H, Inati S, Sørensen TS, Kellman P, Hansen MS. Distributed MRI reconstruction using Gadgetron-based cloud computing. Magn Reson Med. 2015 Mar;73(3):1015-25. doi: 10.1002/mrm.25213.

  4. Microsoft Azure. https://azure.microsoft.com/en-us/

  5. Kubernetes (K8s): Production-Grade Container Scheduling and Management. Kubernetes GitHub repository, 2021 Nov. https://github.com/kubernetes/kubernetes

  6. Chung S, Kim D, Breton E, Axel L. Rapid B1+ mapping using a preconditioning RF pulse with TurboFLASH readout. Magn Reson Med. 2010 Aug;64(2):439-46. doi: 10.1002/mrm.22423.

  7. Gadgetron-Azure: sample code for deploying Gadgetron image reconstruction in Azure. GitHub repository, 2021 Nov. https://github.com/microsoft/gadgetron-azure

  8. Restivo MC, Campbell-Washburn AE, Kellman P, Xue H, Ramasawmy R, Hansen MS. A framework for constraining image SNR loss due to MR raw data compression. MAGMA. 2019 Apr;32(2):213-225. doi: 10.1007/s10334-018-0709-5.

  9. Chow K, Kellman P, Xue H. Prototyping Image Reconstruction and Analysis with FIRE. SCMR 2021.

Figures

Figure 1: System architecture for the online Kubernetes-based 3D-MRF reconstruction. Data is sent from the scanner to Azure via an SSH tunnel between the scanner's host and an SSH jump pod within the Kubernetes cluster. One or more GPU-enabled nodes then share the load of storing temporary dependencies and reconstructing datasets that arrive on the cluster. Logs are stored to a persistent Prometheus appliance for debugging and monitoring purposes.

Figure 2: Comparison of the presented pipeline's phantom reconstructions against both our previously presented offline 3D-MRF reconstruction in MATLAB and the NIST-provided relaxation specifications for the system phantom used in the experiment.


Figure 3: In-vivo quantitative maps demonstrating the pipeline's ability to perform online B1 correction: a B1 prescan completed during the same examination was automatically forwarded to the Kubernetes cluster.

Figure 4: Comparison of reconstructions performed on uncompressed versus compressed raw datasets for both a phantom and an in-vivo acquisition. In the phantom data, compression introduced 0.00±0.09% error in T1 values and 0.00±0.04% error in T2 values versus the uncompressed data, while in-vivo compression introduced 0.01±0.2% error in T1 values and 0.02±1.17% error in T2 values.

Figure 5: In-vivo quantitative maps demonstrating the pipeline's support for larger coil counts (here, a 32-channel head coil) without a substantial difference in reconstruction speed, increase in network bandwidth, or change in the quantitative maps once compressed.