Attila Szabó will defend his thesis with the title Learning Interpretable Representations of Images on June 24, 15:00 at Engehaldenstrasse 8 in room 002.
Computers represent images as pixels, and each pixel contains three numbers for the red, green, and blue colour values. These numbers are meaningless to humans, and they are mostly useless when used directly with classical machine learning techniques such as linear classifiers. Interpretable representations are the attributes that humans understand: the colour of a person's hair, the viewpoint of a car, or the 3D shape of the object in the scene. Many computer vision tasks can be viewed as learning interpretable representations; for example, a supervised classification algorithm directly learns to represent images with their class labels. In this work we aim to learn interpretable representations (or features) indirectly, with lower levels of supervision. This approach saves on dataset annotation costs and offers the flexibility of reusing the features for multiple follow-up tasks. We make contributions in three main areas: weakly supervised learning, unsupervised learning, and 3D reconstruction.
In the weakly supervised case we use image pairs as supervision. Each pair shares a common attribute and differs in a varying attribute. We propose a training method that learns to separate these attributes into separate feature vectors. The features are then used for attribute transfer and classification. We also present theoretical results on the ambiguities of the learning task and on ways to avoid degenerate solutions.
For unsupervised representation learning, we show a method that separates semantically meaningful concepts. We explain, with ablation studies, how the components of our proposed method work together: a mixing autoencoder, a generative adversarial network, and a classifier.
Finally, we propose a method for learning single-image 3D reconstruction. It uses only images: no human annotation, stereo pairs, synthetic renderings, or ground-truth depth maps are needed. We train a generative model that learns the 3D shape distribution and an encoder that reconstructs the 3D shape. For this we exploit the notion of image realism: the 3D reconstruction of an object has to look realistic when it is rendered from different random angles. We prove the efficacy of our method from first principles.
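The "render from different random angles" idea can be illustrated with a toy numpy sketch. It is only a minimal stand-in for the actual method: the reconstructed 3D shape is represented here as a point cloud, the renderer is a simple orthographic projection, and all function names are hypothetical, not from the thesis.

```python
import numpy as np

def random_rotation(rng):
    # Draw a random 3D rotation via QR decomposition of a Gaussian matrix.
    m = rng.standard_normal((3, 3))
    q, r = np.linalg.qr(m)
    q *= np.sign(np.diag(r))      # fix column signs for a well-defined result
    if np.linalg.det(q) < 0:
        q[:, 0] *= -1             # ensure det = +1 (a rotation, not a reflection)
    return q

def render_views(points, n_views, seed=0):
    # points: (N, 3) point cloud standing in for a reconstructed 3D shape.
    # Each "view" is an orthographic projection after a random rotation;
    # in the actual method, each rendered view is pushed to look realistic.
    rng = np.random.default_rng(seed)
    views = []
    for _ in range(n_views):
        rotated = points @ random_rotation(rng).T
        views.append(rotated[:, :2])  # drop depth: project onto the image plane
    return views
```

A realism critic (e.g. a discriminator) would then score each projected view; only shapes that look plausible from all random viewpoints can satisfy it.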
We present a novel method and analysis to train generative adversarial networks (GANs) in a stable manner. As shown in recent analyses, training is often undermined by the limited support of the probability distribution of the data samples. We notice that the distributions of real and generated data should match even when they undergo the same filtering. Therefore, to address the limited-support problem, we propose to train GANs on filtered versions of the real and generated data distributions, both filtered in the same way. Filtering then does not prevent the exact matching of the data distribution, while it helps training by extending the support of both distributions. As the filter we consider adding samples from an arbitrary distribution to the data, which corresponds to a convolution of the data distribution with the arbitrary one. We also propose to learn the generation of these samples so as to challenge the discriminator in the adversarial training. We show that our approach results in a stable and well-behaved training of even the original minimax GAN formulation. Moreover, our technique can be incorporated into most modern GAN formulations and leads to a consistent improvement on several common datasets.
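The core observation — that convolving both the real and the generated distribution with the same noise distribution extends their supports until they overlap, without changing the matching condition — can be seen in a tiny 1D numpy experiment. This is only an illustrative sketch of the support argument, not the training method itself; the distributions, noise scale, and overlap measure are all chosen for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D "real" and "generated" samples with disjoint supports,
# mimicking the limited-support failure mode of GAN training:
real = rng.uniform(0.0, 1.0, size=10_000)
fake = rng.uniform(2.0, 3.0, size=10_000)

def support_overlap(a, b, bins):
    # Histogram-based overlap coefficient: mass where both densities are non-zero.
    ha, _ = np.histogram(a, bins=bins, density=True)
    hb, _ = np.histogram(b, bins=bins, density=True)
    return np.minimum(ha, hb).sum() / max(ha.sum(), hb.sum())

bins = np.linspace(-2.0, 5.0, 141)
before = support_overlap(real, fake, bins)   # ~0: disjoint supports

# "Filter" both sample sets with the same Gaussian noise, i.e. convolve
# each underlying distribution with the same noise distribution:
real_f = real + rng.normal(0.0, 1.0, size=real.shape)
fake_f = fake + rng.normal(0.0, 1.0, size=fake.shape)
after = support_overlap(real_f, fake_f, bins)  # clearly positive
```

With overlapping supports the discriminator can no longer separate the two distributions perfectly, which is what keeps the adversarial gradients informative.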
We proudly announce the graduation of two of our group members.
Dr. Meiguang Jin, who started his PhD in 2014, had his defense in December last year with the title "Motion Deblurring from a Single Image" and is currently working with us as a postdoc until he returns to China. He recently received the Alumni Award from our Institute for his outstanding dissertation.
Dr. Mehdi Noroozi held his defense presentation "Beyond Supervised Representation Learning" last week on the closing day of the CUSO Deep Learning Winter School. He will continue his research at Bosch in Germany.
Congratulations to both on your achievements. We wish you a successful and bright future ahead.
In one exciting week, the 40 participants of the CUSO Deep Learning Winter School had a chance to catch up on the newest developments in the world of Deep Learning. The school covered a wide range of topics, including theoretical background, unsupervised learning, generative models, language modelling, and more. With inspiring speakers such as Alyosha Efros, René Vidal, Paolo Favaro, and others, the school offered the students an opportunity to connect and discuss, and hopefully to spark new and exciting ideas for future work.
The school was organized by Paolo Favaro, François Fleuret and Marcus Liwicki under the Doctoral Program in Computer Science of the CUSO universities. Head over to the school's website to watch recordings of the talks and download presentation slides.
From Jan. 21 - 25, 2019, the winter school will bring together international experts in deep learning to share their expertise and provide a first-hand account of the fascinating developments and opportunities in this area. The school is organized by Marcus Liwicki, François Fleuret and Paolo Favaro under the Doctoral Program in Computer Science of the CUSO universities. It is primarily targeted at PhD students in computer science in these universities.
Among the speakers are
Registration will open shortly. If you are interested, please visit the website for more information.