Master's thesis presentation: Erik Sandström
Contact: henning [dot] petzka [at] math [dot] lth [dot] se
Latent Space Growing of Generative Adversarial Networks (Interpretable Representations of Faces with Artificial Intelligence)
This thesis presents a system that builds on the Generative Adversarial Network (GAN) framework with a focus on learning interpretable representations of data. The system learns representations that are ordered by the saliency of the attributes, in a completely unsupervised manner. The training strategy expands the latent space dimension while adding capacity to the model in a controlled way, building on the intuition that the most salient attributes are the easiest to learn first. Empirical results on the Swiss roll dataset show that the representation is structured by attribute saliency when the latent space is trained progressively on a very simple GAN architecture. Experiments with a more complex system, trained on the CelebA dataset, scale the idea to a more interesting use case. Experiments using latent space interpolations show that this is a promising direction for future research on learning interpretable representations.
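The core idea of progressively growing the latent space can be sketched in a few lines. The toy class below is a hypothetical illustration, not the thesis's actual architecture: a linear "generator" starts with a single latent dimension, and each growth step appends one new latent dimension with a freshly initialized weight column, so directions learned earlier (the most salient attributes) are preserved while capacity is added in a controlled way.

```python
import numpy as np

class GrowableGenerator:
    """Toy linear generator whose latent space grows one dimension at a time.

    Hypothetical sketch of latent-space growing: each call to grow() appends
    a new latent dimension and a freshly initialized weight column, leaving
    previously learned columns untouched.
    """

    def __init__(self, data_dim, rng=None):
        self.rng = rng or np.random.default_rng(0)
        self.data_dim = data_dim
        # Start with a single latent dimension (the most salient attribute).
        self.W = self.rng.normal(scale=0.1, size=(data_dim, 1))

    @property
    def latent_dim(self):
        return self.W.shape[1]

    def grow(self):
        """Add one latent dimension; existing columns are kept unchanged."""
        new_col = self.rng.normal(scale=0.1, size=(self.data_dim, 1))
        self.W = np.hstack([self.W, new_col])

    def generate(self, z):
        """Map a latent batch z of shape (n, latent_dim) to data space (n, data_dim)."""
        return z @ self.W.T

g = GrowableGenerator(data_dim=3)
old_first_col = g.W[:, 0].copy()
g.grow()  # latent space grows from 1 to 2 dimensions
samples = g.generate(np.zeros((4, g.latent_dim)))
```

In a real GAN this growth step would be interleaved with adversarial training phases, and the discriminator's capacity would be increased in step with the generator's.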