SA-LuT-Nets: Learning Sample-adaptive Intensity Lookup Tables for Brain Tumor Segmentation

Publication Name

IEEE Transactions on Medical Imaging

Abstract

In clinical practice, information about the appearance and location of brain tumors is essential for diagnosis and treatment. Automatic brain tumor segmentation on images acquired by magnetic resonance imaging (MRI) is a common way to obtain this information. However, MR images are not quantitative and can exhibit significant variation in signal depending on a range of factors, which makes it difficult to train an automatic segmentation network and apply it to new MR images. To address this issue, this paper proposes to learn a sample-adaptive intensity lookup table (LuT) that dynamically transforms the intensity contrast of each input MR image to suit the downstream segmentation task. Specifically, the proposed deep SA-LuT-Net framework consists of a LuT module and a segmentation module trained in an end-to-end manner: the LuT module learns a sample-specific nonlinear intensity mapping function through communication with the segmentation module, aiming to improve the final segmentation performance. To make the LuT learning sample-adaptive, we parameterize the intensity mapping function using two families of nonlinear functions (i.e., piecewise-linear and power functions) and predict the function parameters for each given sample, so that the intensity mapping adapts to individual samples. We develop our SA-LuT-Nets separately on two segmentation backbones, i.e., DMFNet and a modified 3D Unet, and validate them on the BRATS2018 and BRATS2019 datasets for brain tumor segmentation. Our experimental results demonstrate the superior performance of the proposed SA-LuT-Nets using either single or multiple MR modalities: they not only significantly improve the two baselines (DMFNet and the modified 3D Unet) but also outperform a set of state-of-the-art segmentation methods. Moreover, we show that the LuTs learnt with one segmentation model can also be applied to improve the performance of another segmentation model, indicating that the LuTs capture segmentation information of general utility.
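To make the end-to-end coupling concrete, below is a minimal sketch of the power-function variant, assuming a PyTorch-style implementation. The names (PowerLuT, SALuTNet), the parameter-predictor architecture, its layer sizes, and the parameter ranges are illustrative assumptions for this sketch, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PowerLuT(nn.Module):
    """Predict per-sample (gamma, scale) and apply I_out = scale * I_in ** gamma."""

    def __init__(self, in_channels: int = 1):
        super().__init__()
        # Small hypothetical parameter predictor; the paper's actual
        # LuT-module architecture may differ.
        self.predictor = nn.Sequential(
            nn.Conv3d(in_channels, 8, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),
            nn.Flatten(),
            nn.Linear(8, 2),  # two scalars per sample: gamma and scale
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, D, H, W), intensities assumed normalised to [0, 1]
        params = self.predictor(x)
        gamma = F.softplus(params[:, 0]).view(-1, 1, 1, 1, 1) + 1e-3  # keep gamma > 0
        scale = torch.sigmoid(params[:, 1]).view(-1, 1, 1, 1, 1) + 0.5  # keep scale near 1
        return scale * x.clamp(min=0.0).pow(gamma)


class SALuTNet(nn.Module):
    """Sample-adaptive LuT followed by a 3D segmentation backbone, trained end-to-end."""

    def __init__(self, backbone: nn.Module, in_channels: int = 1):
        super().__init__()
        self.lut = PowerLuT(in_channels)
        self.backbone = backbone  # e.g. DMFNet or a 3D U-Net variant

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The segmentation loss back-propagates through the backbone into the
        # LuT predictor, which is the end-to-end coupling described in the abstract.
        return self.backbone(self.lut(x))
```

A piecewise-linear variant would follow the same pattern, with the predictor emitting one slope (or control-point value) per segment of the intensity range instead of a single exponent and scale.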

Open Access Status

This publication is not available as open access

Link to publisher version (DOI)

http://dx.doi.org/10.1109/TMI.2021.3056678