VoteNet+: An Improved Deep Learning Label Fusion Method for Multi-Atlas Segmentation

Abstract

In this work, we improve the performance of multi-atlas segmentation (MAS) by integrating the recently proposed VoteNet model with the joint label fusion (JLF) approach. Specifically, we first show that using a deep convolutional neural network to predict atlas probabilities can better distinguish correct atlas labels from incorrect ones than relying on image intensity differences, as is typical in JLF. Motivated by this finding, we propose VoteNet+, an improved deep network that locally predicts the probability that an atlas label differs from the label of the target image. Furthermore, we show that JLF is a more suitable label fusion method within the VoteNet framework than plurality voting. Lastly, we use Platt scaling to calibrate the probabilities of our new model. Results on LPBA40 3D MR brain images show that our proposed method achieves better performance than VoteNet.
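
The abstract mentions Platt scaling for calibrating the predicted probabilities. As a minimal illustrative sketch (not the authors' implementation), Platt scaling fits a one-dimensional logistic regression on held-out uncalibrated scores against binary correctness labels, and then uses its sigmoid output as the calibrated probability. The data below is synthetic and the variable names are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for uncalibrated voxel-wise scores (logits) on a
# held-out set, with binary labels: 1 if the atlas label matched the target.
rng = np.random.default_rng(0)
logits = rng.normal(size=(1000, 1))
labels = (logits[:, 0] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# Platt scaling: fit a logistic regression with the raw score as the
# single input feature; its learned slope and intercept define the
# calibration mapping.
platt = LogisticRegression()
platt.fit(logits, labels)

# Calibrated probabilities for new scores.
new_logits = np.array([[-2.0], [0.0], [1.5]])
calibrated = platt.predict_proba(new_logits)[:, 1]
print(calibrated)
```

In practice the calibration set should be disjoint from the data used to train the network, so that the fitted sigmoid corrects systematic over- or under-confidence rather than memorized scores.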

Publication
17th IEEE International Symposium on Biomedical Imaging, ISBI 2020, Iowa City, IA, USA, April 3-7, 2020
Zhipeng Ding
Ph.D. in Computer Science

My research interests include image registration and machine learning.

Xu Han
Ph.D. in Computer Science

My research interests include image registration for images with pathologies and machine learning.

Marc Niethammer
Professor of Computer Science

My research interests include image registration, image segmentation, shape analysis, machine learning, and biomedical applications.
