Our CVPR 2019 Papers
March 27, 2019

We recently had two papers accepted at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), which will take place in Long Beach, California, from June 16-20. With over 5000 submissions and 1300 accepted papers (an acceptance rate of ~25%), CVPR is one of the largest top-tier conferences for Computer Vision and Machine Learning. Below you will find the abstracts of our two submissions. Keep an eye on our publication page, as it will be updated with new material in the coming weeks.
Learning to Extract Flawless Slow Motion from Blurry Videos
by M. Jin, Z. Hu and P. Favaro
In this paper, we introduce the task of generating a sharp slow-motion video from a low-frame-rate blurry video. We propose a data-driven approach, in which the training data is captured with a high-frame-rate camera and blurry images are simulated through an averaging process. While it is possible to train a neural network to recover the sharp frames from their average, there is no guarantee of temporal smoothness in the resulting video, as the frames are estimated independently. To solve this problem we introduce two networks: DeblurNet, which predicts sharp keyframes, and InterpNet, which predicts intermediate frames between the generated keyframes. To address the temporal smoothness requirement, we obtain two sets of keyframes from two subsequent blurry input images and then apply InterpNet between all subsequent pairs of keyframes, including the case where one keyframe is generated from one blurry image and the other keyframe from the next. A smooth transition is therefore ensured by interpolating between consecutive keyframes with InterpNet. Moreover, the proposed scheme enables a further increase in frame rate without retraining the network, by applying InterpNet recursively between pairs of sharp frames. We demonstrate the performance of our approach by increasing the frame rate of real blurry videos up to 20 times. We also evaluate on several datasets, including a novel dataset captured with a Sony RX V camera, which we will make publicly available.
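The data-generation step described above, where a blurry frame is simulated by averaging consecutive sharp frames from a high-frame-rate camera, can be sketched as follows. This is an illustrative approximation, not the paper's released code; the function name and window size are our own choices.

```python
import numpy as np

def simulate_blur(sharp_frames):
    """Average a window of consecutive sharp frames to synthesize one
    motion-blurred frame (illustrative stand-in for the paper's
    averaging-based blur simulation)."""
    return np.mean(np.stack(sharp_frames, axis=0), axis=0)

# Seven consecutive sharp frames from a high-frame-rate camera
# (random stand-ins here), averaged into a single blurry frame.
frames = [np.random.rand(64, 64, 3) for _ in range(7)]
blurry = simulate_blur(frames)
print(blurry.shape)  # (64, 64, 3)
```

A network such as the paper's DeblurNet would then be trained to invert this averaging, recovering the sharp keyframes from the blurry input.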
On Stabilizing Generative Adversarial Training with Noise
by S. Jenni and P. Favaro
We present a novel method and analysis to train generative adversarial networks (GANs) in a stable manner. As shown by recent analyses, training is often undermined by the limited support of the probability distribution of the data samples. We observe that the distributions of real and generated data should match even when they undergo the same filtering. Therefore, to address the limited-support problem, we propose to train GANs using differently filtered versions of the real and generated data distributions. In this way, filtering does not prevent the exact matching of the data distribution, while it helps training by extending the support of both distributions. As filtering, we consider adding samples from an arbitrary distribution to the data, which corresponds to a convolution of the data distribution with the arbitrary one. We also propose to learn the generation of these samples so as to challenge the discriminator in the adversarial training. We show that our approach results in a stable and well-behaved training of even the original minimax GAN formulation. Moreover, our technique can be incorporated into most modern GAN formulations and leads to a consistent improvement on several common datasets.
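The filtering idea above can be sketched in a few lines: adding noise samples to a batch corresponds to convolving its distribution with the noise distribution, which widens its support. This is a minimal illustration with a fixed Gaussian noise distribution; the paper additionally learns the noise samples, which is not shown here, and all names are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

def filter_with_noise(samples, sigma=0.5):
    """Add samples from a noise distribution (here: Gaussian) to the data.
    The filtered batch is distributed as the convolution of the data
    distribution with the noise distribution, extending its support."""
    return samples + sigma * rng.standard_normal(samples.shape)

real = rng.standard_normal((128, 2)) + 5.0  # stand-in "real" batch
fake = rng.standard_normal((128, 2)) - 5.0  # stand-in "generated" batch

# Both sides are filtered the same way before being shown to the
# discriminator; since identical filtering is applied to each side,
# matching the filtered distributions still forces the generator to
# match the true data distribution exactly.
real_f = filter_with_noise(real)
fake_f = filter_with_noise(fake)
```

In a GAN training loop, `real_f` and `fake_f` would replace `real` and `fake` as discriminator inputs; the generator's objective is otherwise unchanged.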

Graduation Announcements
Jan. 28, 2019

We proudly announce the graduation for two of our group members.

Dr. Meiguang Jin, who started his PhD in 2014, defended his thesis, "Motion Deblurring from a Single Image", in December last year and is currently working with us as a postdoc until he returns to China. He recently received the Alumni Award from our institute for his outstanding dissertation.

Dr. Mehdi Noroozi gave his defense presentation, "Beyond Supervised Representation Learning", last week on the closing day of the CUSO Deep Learning Winter School. He will continue his research at Bosch in Germany.

Congratulations to you both on your achievements. We wish you a successful and bright future ahead.

Deep Learning in Lenk
Jan. 27, 2019

In one exciting week, the 40 participants of the CUSO Deep Learning Winter School had the chance to catch up on the newest developments in the world of Deep Learning. The school covered a wide range of topics, including theoretical background, unsupervised learning, generative models, language modelling, and more. With inspiring speakers such as Alyosha Efros, René Vidal, Paolo Favaro, and others, the school offered the students an opportunity to connect and discuss, and hopefully to spark new and exciting ideas for future work.

The school was organized by Paolo Favaro, François Fleuret and Marcus Liwicki under the Doctoral Program in Computer Science of the CUSO universities. Head over to the school's website to watch recordings of the talks and download presentation slides. 

Winter School on Deep Learning
Nov. 30, 2018

From Jan. 21 - 25, 2019, the winter school will bring together international experts in deep learning to share their expertise and provide a first-hand account of the fascinating developments and opportunities in this area. The school is organized by Marcus Liwicki, François Fleuret and Paolo Favaro under the Doctoral Program in Computer Science of the CUSO universities. It is primarily targeted at PhD students in computer science at these universities.

Among the speakers are

  • Alexei Efros, Professor at UC Berkeley
  • Jan Koutník, Director of Intelligent Automation, NNAISENSE
  • Moustapha Cisse, Head of Google AI center in Accra, Ghana
  • Klaus Greff, Ph.D. student at IDSIA
  • and more 

Registration will open shortly. If you are interested, please visit the website for more information.

Machine Learning Is Reaching for the Stars
Nov. 29, 2018

The National Centre of Competence in Research PlanetS is organizing a workshop on Machine Learning from Feb. 13 - 15, 2019 at the Geneva Observatory, University of Geneva. The three-day event will introduce astronomers to the practical use of Machine Learning for astronomical data. Specifically, it will cover these subjects: 

  • Supervised: Deep Learning for transit detection
  • Supervised: from linear regression, through XGBoost, to AutoML and SVM
  • Unsupervised: dimensionality reduction from PCA to t-SNE

Prof. Paolo Favaro and PhD student Simon Jenni are invited to speak at the event. Registration is currently open but limited to 30 participants. For more information, visit the workshop website.

updated Feb. 22, 2019