Publication

Latent space mapping for generation of object elements with corresponding data annotation

Bazrafkan, Shabab
Javidnia, Hossein
Corcoran, Peter
Citation
Bazrafkan, Shabab, Javidnia, Hossein, & Corcoran, Peter. (2018). Latent space mapping for generation of object elements with corresponding data annotation. Pattern Recognition Letters, 116, 179-186. https://doi.org/10.1016/j.patrec.2018.10.025
Abstract
Deep neural generative models such as Variational Auto-Encoders (VAEs) and Generative Adversarial Networks (GANs) give promising results in estimating the data distribution across a range of machine learning applications. Recent results have been especially impressive in image synthesis, where learning the spatial appearance information is a key goal; this enables the generation of intermediate spatial data that corresponds to the original dataset. During training, these models learn to decrease the distance between their output distribution and the actual data distribution, and at test time they map a latent space to the data space. Since these models have already learned their latent space mapping, a natural question is whether, for a given generator, there is also a function mapping the latent space to any other aspect of the database. In this work, it is shown that such a mapping is relatively straightforward to learn using small neural network models trained by minimizing the mean square error. As a demonstration of this technique, two example use cases have been implemented: first, the generation of facial images with corresponding landmark data and, second, the generation of low-quality iris images (as would be captured with a smartphone user-facing camera) with a corresponding ground-truth segmentation contour.
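
To make the described technique concrete, the following is a minimal PyTorch sketch of a latent-to-annotation mapping of the kind the abstract outlines. All names and dimensions here (a 100-D latent code, 68 facial landmarks, the annotator network architecture) are illustrative assumptions rather than the paper's actual configuration, and the sketch assumes latent codes for the annotated training images are available (e.g., from a VAE encoder).

    import torch
    import torch.nn as nn

    # Assumed, not from the paper: 100-D latent space, 68 (x, y) facial landmarks.
    LATENT_DIM, N_LANDMARKS = 100, 68

    # A small fully connected network mapping latent codes to annotations.
    annotator = nn.Sequential(
        nn.Linear(LATENT_DIM, 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, 2 * N_LANDMARKS),
    )

    optimizer = torch.optim.Adam(annotator.parameters(), lr=1e-3)
    criterion = nn.MSELoss()

    def train_step(z, landmarks):
        # z: (batch, LATENT_DIM) latent codes of annotated training images;
        # landmarks: (batch, 2 * N_LANDMARKS) ground-truth coordinates.
        optimizer.zero_grad()
        loss = criterion(annotator(z), landmarks)
        loss.backward()
        optimizer.step()
        return loss.item()

    # At test time, a single shared latent code z yields an image together
    # with its annotation:
    #   image = generator(z)        # pretrained, frozen generative model
    #   landmarks = annotator(z)    # learned latent-to-annotation mapping

Because the generator stays frozen, such an annotator can be trained independently at low cost, and any annotation with a fixed-length encoding (landmark coordinates, a sampled segmentation contour) fits the same mean-square-error setup.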
Publisher
Elsevier
Publisher DOI
https://doi.org/10.1016/j.patrec.2018.10.025
Rights
CC BY-NC-ND 3.0 IE