Coupled Dictionary Learning for Image Analysis

Abstract

Modern imaging technologies provide different ways to visualize various objects, ranging from molecules in a cell to the tissue of a human body. Images from different imaging modalities reveal distinct information about these objects. Thus, a common problem in image analysis is how to relate the information obtained from different modalities; one example is relating protein locations from fluorescence microscopy to protein structures from electron microscopy. These problems are challenging because of the difficulty of modeling the relationship between the information provided by different modalities. In this dissertation, a coupled dictionary learning based image analogy method is first introduced to synthesize images in one modality from images in another. As a result, my method simplifies multi-modal registration (for example, registration of correlative microscopy images) to a mono-modal problem. Furthermore, a semi-coupled dictionary learning based framework is proposed to estimate deformations from image appearance. Moreover, a coupled dictionary learning method is explored to capture the relationship between GTPase activations and cell protrusions and retractions. Finally, a probabilistic model is proposed for robust coupled dictionary learning, addressing the problem of learning a coupled dictionary from data that include non-corresponding pairs. This method discriminates between corresponding and non-corresponding data and removes the non-corresponding data during learning, thereby producing a “clean” coupled dictionary.
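As a rough illustration of the coupled dictionary learning idea referenced above (not taken from the dissertation), the sketch below learns a pair of dictionaries over two modalities via joint sparse coding with a shared code, and then uses the shared code for image-analogy-style synthesis. It assumes scikit-learn is available; the data, patch sizes, and parameters are placeholders.

    import numpy as np
    from sklearn.decomposition import DictionaryLearning, sparse_encode

    # Placeholder paired patches from two modalities (rows are corresponding samples).
    X1 = np.random.rand(200, 64)   # e.g., patches from modality 1
    X2 = np.random.rand(200, 64)   # e.g., corresponding patches from modality 2

    # Joint sparse coding: stack the modalities so both share one sparse code,
    # a common way to obtain a coupled dictionary pair (D1, D2).
    model = DictionaryLearning(n_components=128, alpha=1.0, max_iter=100)
    model.fit(np.hstack([X1, X2]))
    D1 = model.components_[:, :64]   # dictionary for modality 1
    D2 = model.components_[:, 64:]   # dictionary for modality 2

    # Synthesis (image analogy): sparse-code a new modality-1 patch with D1,
    # then reconstruct its modality-2 counterpart from the shared code and D2.
    x1_new = np.random.rand(1, 64)
    code = sparse_encode(x1_new, D1, alpha=1.0)
    x2_pred = code @ D2

This shows only the generic shared-code formulation commonly used for coupled dictionary learning; the dissertation's specific objectives (semi-coupled and robust variants) differ in how the codes and correspondences are modeled.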

Tian Cao
Ph.D. in Computer Science