Creation of Voxelwise 2D Lookup Tables (LUTs) for MRF-based Synthesis of Qualitative Images
Andrew Dupuis1, Yong Chen2, Mark A Griswold1,2, and Rasim Boyacioglu2
1Biomedical Engineering, Case Western Reserve University, Cleveland, OH, United States, 2Radiology, School of Medicine, Case Western Reserve University, Cleveland, OH, United States

Synopsis

Motivation: Address the challenge of integrating MRF data into existing MRI analysis pipelines via synthetic images, without the spatial artifacts or hallucinations that CNN-based synthesis can introduce.

Goal(s): Generate static lookup tables (LUTs) mapping from T1/T2 value space directly to grayscale visualizations matching clinical contrasts.

Approach: A simple pixel-wise regression network was trained on a public dataset of MRF data and weighted images. Static LUTs were generated from dictionaries of T1/T2 combinations, then applied to MRF-derived maps for visualization and processing via FSL.

Results: Successful generation of synthetic contrast LUTs ensures reproducibility and allows instantaneous visualization or registration of MRF maps in a more conventional grayscale format.

Impact: Integration of MRF into traditional analysis pipelines suffers because quantitative maps have inherently different contrasts from weighted images. LUTs for instant, deterministic generation of weighted contrasts from T1/T2 maps allow for direct use of tools like FSL with MRF data.

Introduction

Magnetic Resonance Fingerprinting (MRF) enables precise quantitative mapping of T1 and T2 relaxation times in an efficient single scan [1]. However, the integration of MRF-derived data into traditional MRI analysis pipelines remains a challenge due to the inherently different contrast characteristics of quantitative maps versus weighted images.

Contrast synthesis typically relies on direct Bloch simulation of a desired contrast state or on convolutional [2,3] or patch-based [4] neural networks for image-to-image translation. While CNNs can produce high-fidelity results [5], they are also susceptible to hallucination artifacts [6], in which unintended features or spatial errors are introduced. Instead, we propose a pixel-wise regression network that, after training, behaves like a colormap: it is exported as a 2D lookup table (LUT) indexed by MRF T1 and T2 values.

Methods

A public multimodal brain imaging dataset [7] consisting of MRF maps, T1-weighted (T1w) Magnetization-Prepared Rapid Gradient Echo (MPRAGE) images, and T2-weighted (T2w) Turbo Spin Echo (TSE) images for 10 healthy volunteers was used to train our regression network. All code for our lookup table system, regression network design, training, and inference is publicly available [cite] under a research license. We used TensorFlow and Keras for model training. A fully connected network was defined, consisting of a normalization layer, a dense hidden layer with 64 nodes, and an output layer. Training data were masked to remove voxels outside the skull, and weighted images were oriented to match the MRF maps based on their NIFTI headers. For each MRF dataset, an initial Bloch-simulation-based synthetic image matching the target qualitative dataset was generated and a coarse linear registration was performed using ITK. Target images were normalized by their maximum value to improve training stability.
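To make the architecture concrete, a minimal Keras sketch of the described network follows; the ReLU activation, Adam optimizer, and variable names are our own illustrative assumptions, since only the layer structure is specified above.

    import numpy as np
    import tensorflow as tf

    # Inputs: one (T1, T2) pair per voxel; output: one weighted-contrast intensity.
    normalizer = tf.keras.layers.Normalization(axis=-1)

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(2,)),            # [T1_ms, T2_ms]
        normalizer,                            # standardizes each input channel
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1),              # normalized synthetic intensity
    ])

    # The normalization layer is adapted to the training voxels before fitting:
    # normalizer.adapt(train_t1t2)             # train_t1t2: (n_voxels, 2) array
    model.compile(optimizer="adam", loss="mean_absolute_error")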

Voxel data from 8 subjects were linearized, then split into training (80%) and testing (20%) sets before training with a mean absolute error (MAE) loss, ensuring that the predicted synthetic intensities closely matched the normalized intensity values of the weighted images. Model structure and weights were archived, but the primary training output is a static two-dimensional LUT indexed by T1 and T2 times in 1 ms steps over the range [T1: 100-5000 ms, T2: 1-500 ms], generated by running all T1/T2 combinations within the dictionary through the inference network. The resulting LUT removes any dependency on TensorFlow from our reconstruction environment, ensuring that contrast synthesis is consistent and repeatable regardless of future changes to the inference package used.
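As a sketch of this LUT export step, assuming the trained model from the sketch above (the output file name and batch size are illustrative):

    # Enumerate every dictionary T1/T2 combination at 1 ms resolution.
    t1_values = np.arange(100, 5001)           # T1: 100-5000 ms
    t2_values = np.arange(1, 501)              # T2: 1-500 ms
    t1_grid, t2_grid = np.meshgrid(t1_values, t2_values, indexing="ij")
    pairs = np.stack([t1_grid.ravel(), t2_grid.ravel()], axis=-1).astype(np.float32)

    # A single batch inference pass fills the whole table; TensorFlow is no
    # longer needed once the LUT array has been saved.
    lut = model.predict(pairs, batch_size=65536)
    lut = lut.reshape(t1_values.size, t2_values.size)
    np.save("mprage_lut.npy", lut)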

Synthetic T1w and T2w images were generated for the two remaining subjects to validate the performance of the generated LUTs. Additionally, in vivo MRF datasets were acquired for two new subjects and reconstructed with dictionary-space quadratic interpolation in the pattern matching step. Finally, FSL was used to compare brain extraction performance on T1/T2 maps alone against LUT-derived synthetic images.
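Once exported, applying a LUT to quantitative maps reduces to array indexing; a minimal sketch, with illustrative file and variable names:

    import numpy as np

    lut = np.load("mprage_lut.npy")            # shape: (T1 steps, T2 steps)

    def apply_lut(t1_map, t2_map, lut, t1_range=(100, 5000), t2_range=(1, 500)):
        """Map voxelwise T1/T2 values (ms) to synthetic grayscale intensities."""
        # Clip to the LUT's coverage, then convert millisecond values to indices.
        t1_idx = np.clip(np.rint(t1_map), *t1_range).astype(int) - t1_range[0]
        t2_idx = np.clip(np.rint(t2_map), *t2_range).astype(int) - t2_range[0]
        return lut[t1_idx, t2_idx]

    # synthetic_mprage = apply_lut(t1_map, t2_map, lut)  # instant and deterministic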

Results

Figure 2 shows six different 2D T1/T2 LUTs, including linear T1 and T2 intensity profiles as well as Bloch-derived and regression-derived MPRAGE and TSE profiles. Figure 3 shows the results of applying our regression-generated LUT profiles to a subject's T1/T2 map pair alongside clinical MPRAGE and TSE contrasts. A dataset reconstructed with a very coarse (5% progressive step-size) MRF dictionary was used to demonstrate the posterization artifacts that can arise when the input quantitative maps are excessively discretized. Figure 4 shows the improved fidelity of synthetic images when quadratic interpolation in T1/T2 space is used during pattern matching to reduce these discretization artifacts. Figure 5 demonstrates the performance of FSL's BET brain extraction tool when given T1 maps alone, T2 maps alone, or a synthetic MPRAGE created by applying the appropriate LUT to the same input data prior to running BET.
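The exact quadratic interpolation scheme is not detailed here; as an assumed illustration only, one common realization is parabolic refinement of the dictionary match peak along each parameter axis, which yields sub-step T1/T2 estimates and thus smoother input maps for the LUT.

    def parabolic_offset(c_minus, c_peak, c_plus):
        """Sub-step offset of a parabola fit through three match scores.

        Returns an offset in (-0.5, 0.5) dictionary steps around the peak
        entry; 0 if the neighborhood is flat.
        """
        denom = c_minus - 2.0 * c_peak + c_plus
        if denom == 0.0:
            return 0.0
        return 0.5 * (c_minus - c_plus) / denom

    # Example: match scores for dictionary entries at T1 = 900, 1000, 1100 ms
    # t1_refined = 1000 + 100 * parabolic_offset(0.95, 0.99, 0.97)  # ~1016.7 ms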

Discussion

To address the limitations of CNN-based approaches, we implemented a pixel-wise regression network that takes MRF-derived T1 and T2 maps as inputs and is trained, with supervision, against co-registered MPRAGE or TSE images acquired during the same scanning session. Training creates a unique mapping from quantitative T1/T2 value pairs to specific intensities in a target qualitative contrast.

After training and 2D LUT generation, synthetic contrasts can be generated instantly. Overall, contrast similarity to the target qualitative modalities is adequate for use in existing software tools for co-registration, skull stripping, and similar tasks, as shown in Figure 5. Importantly, since the contrast behavior of an exported lookup table is deterministic, the reproducibility of output synthetic contrasts is identical to the reproducibility of the MRF maps used in the generation process.

Acknowledgements

No acknowledgement found.

References

[1] Ma, D., Gulani, V., Seiberlich, N., Liu, K., Sunshine, J. L., Duerk, J. L., & Griswold, M. A. (2013). Magnetic resonance fingerprinting. Nature, 495(7440), 187-192.
[2] Virtue, P., Tamir, J. I., Doneva, M., Yu, S. X., & Lustig, M. (2018). Learning contrast synthesis from MR fingerprinting. In Proceedings of the 26th Annual Meeting of ISMRM (abstract 676).
[3] Nykänen, O., Nevalainen, M., Casula, V., Isosalo, A., Inkinen, S. I., Nikki, M., ... & Nieminen, M. T. (2023). Deep‐Learning‐Based Contrast Synthesis From MRF Parameter Maps in the Knee Joint. Journal of Magnetic Resonance Imaging, 58(2), 559-568.
[4] Zhang, X., Flassbeck, S., & Assländer, J. (2022). MRI contrast synthesis from low-rank coefficient images. In Proceedings of the ISMRM Annual Meeting.
[5] Chaudhari, A. S., Sandino, C. M., Cole, E. K., Larson, D. B., Gold, G. E., Vasanawala, S. S., ... & Langlotz, C. P. (2021). Prospective deployment of deep learning in MRI: a framework for important considerations, challenges, and recommendations for best practices. Journal of Magnetic Resonance Imaging, 54(2), 357-371.
[6] Muckley, M. J., Riemenschneider, B., Radmanesh, A., Kim, S., Jeong, G., Ko, J., ... & Knoll, F. (2020). State-of-the-art machine learning MRI reconstruction in 2020: Results of the second fastMRI challenge. arXiv preprint arXiv:2012.06318, 2(6), 7.
[7] Dupuis, A., Chen, Y., Hansen, M., Chow, K., Sun, J. E. P., Badve, C., Ma, D., Griswold, M. A., & Boyacioglu, R. (2023). Intrasession, Intersession, and Interscanner Qualitative and Quantitative MRI Datasets of Healthy Brains at 3.0T (0.1.0) [Data set]. Zenodo. https://doi.org/10.5281/zenodo.8183344

Figures

Figure 1: Generation of the weighted contrast lookup tables was performed using a simple 1-layer, 64-node voxelwise regression network with 2 inputs (voxel T1 value, voxel T2 value) trained against a single output (voxel weighted-contrast intensity). After training the network, a lookup table was generated by passing MRF dictionary T1/T2 value combinations through the inference process, yielding a LUT from dictionary space to weighted contrast space. Once generated, LUTs can be applied directly to T1/T2 map pairs to instantly visualize the data as a synthetic weighted image.

Figure 2: Sample generated grayscale lookup tables with corresponding visualizations of a single T1/T2 map pair. Each lookup table accepts T1 and T2 data as input and yields grayscale intensities. Top: linear T1; Bloch-derived MPRAGE; regression-derived MPRAGE. Bottom: linear T2; Bloch-derived TSE; regression-derived TSE.

Figure 3: Sample images showing the same T1/T2 maps visualized using the regression-generated LUTs versus the registered clinical MPRAGE and TSE qualitative images. Note that there is posterization present in the generated synthetic images due to the relatively coarse dictionary used during the MRF pattern match reconstruction. Artifacts present in the T1/T2 source maps, such as posterization or uncorrected B1 inhomogeneity, will be directly reflected once a lookup table is applied.

Figure 4: Detailed image quality example for an unseen/untrained subject scanned on a different scanner and reconstructed with dictionary-space quadratic interpolation during the pattern matching process. Note the substantially reduced posterization thanks to the more continuous distribution of T1/T2 values within the source maps.

Figure 5: Results from brain extraction performed using FSL’s BET tool with (from left to right) T1 map, T2 map, and LUT-generated MPRAGE images as input. The results shown represent the best combination of BETFrac and BETVerticalGradient parameters found for each input type. Note that even in these single-slice examples, both the T1 and T2 input cases leave substantial artifacts around the brain surface, while the synthetic input yields a much tighter margin around the brain’s surface.