We hit the bullseye again, twice this time. Two papers, by Givi Meishvili and Simon Jenni, have been accepted to CVPR, one of the largest top-tier conferences in the field of Computer Vision. Moreover, both papers were classified as orals, meaning they fall into the top 25% of accepted papers, which get the opportunity to be presented in a 5-minute talk at the conference. Congratulations on this achievement! The titles and abstracts are listed below. We will soon update the publication section on our website with the papers and supplementary material.
Steering Self-Supervised Feature Learning Beyond Local Pixel Statistics
Simon Jenni, Hailin Jin and Paolo Favaro
We introduce a novel principle for self-supervised feature learning based on the discrimination of specific transformations of an image. We argue that the generalization capability of learned features depends on the image neighborhood size that is sufficient to discriminate different image transformations: the larger the required neighborhood size, the higher the order of the image statistics that the feature can describe. An accurate description of higher-order image statistics allows us to better represent the shape and configuration of objects and their context, which ultimately generalizes better to new tasks such as object classification and detection. This suggests a criterion for choosing and designing image transformations. Based on this criterion, we introduce a novel image transformation that we call limited context inpainting (LCI). This transformation inpaints an image patch conditioned only on a small rectangular pixel boundary (the limited context). Because of the limited boundary information, the inpainter can learn to match local pixel statistics, but is unlikely to match the global statistics of the image. We claim that the same principle can be used to justify the performance of transformations such as image rotations and warping. Indeed, we demonstrate experimentally that learning to discriminate transformations such as LCI, image warping, and rotations yields features with state-of-the-art generalization capabilities on several datasets such as Pascal VOC, STL-10, CelebA, and ImageNet. Remarkably, our trained features achieve higher performance on Places than features trained through supervised learning with ImageNet labels.
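The "limited context" idea can be illustrated as a simple masking operation: erase the interior of a patch while keeping a thin pixel frame for the inpainter to condition on. The following is a minimal NumPy sketch; the function name, patch size, and border width are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

def lci_mask(image, top, left, patch=32, border=4):
    """Erase the interior of a (patch x patch) region, keeping only a
    thin `border`-pixel frame: the 'limited context' the inpainter
    conditions on. Name and sizes are illustrative assumptions."""
    out = image.copy()
    # Zero the patch interior; the `border`-pixel frame stays intact.
    out[top + border : top + patch - border,
        left + border : left + patch - border] = 0
    return out

masked = lci_mask(np.ones((64, 64)), top=10, left=10)
```

Because only the thin frame survives, an inpainter trained on such inputs can match local pixel statistics inside the patch but lacks the information to reproduce the image's global structure, which is what makes inpainted patches distinguishable from real ones.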
Learning to Have an Ear for Face Super-Resolution
Givi Meishvili, Simon Jenni and Paolo Favaro
We propose a novel method that uses both audio and a low-resolution image to perform extreme face super-resolution (a 16x increase of the input size). When the resolution of the input image is very low (e.g., 8x8 pixels), the loss of information is so severe that important details of the original identity have been lost, and audio can aid the recovery of a plausible high-resolution image. In fact, audio carries information about facial attributes, such as gender and age. Moreover, if an audio track belongs to an identity in a known training set, such audio might even help to restore the original identity. Towards this goal, we propose a model and a training procedure to extract information about the face of a person from her audio track and to combine it with the information extracted from her low-resolution image, which relates more to the pose and colors of the face. We demonstrate that the combination of these two inputs yields high-resolution images that better capture the correct attributes of the face. In particular, we show experimentally that audio can assist in recovering attributes such as gender, age, and identity, and thus improve the correctness of the image reconstruction process. Our procedure does not make use of human annotation and thus can easily be trained with existing video datasets. Moreover, we show that our model builds a factorized representation of images and audio, as it allows one to mix low-resolution images and audio from different videos and to generate realistic faces with semantically meaningful combinations.
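Schematically, the approach encodes each input into a feature vector and combines the two codes before decoding to the high-resolution face. The NumPy sketch below uses stand-in vectors and plain concatenation as an assumed, simplified fusion; the paper's actual encoders and fusion module are learned networks, and the embedding sizes here are invented for illustration.

```python
import numpy as np

# Stand-in embeddings; real encoders are learned networks
# (sizes 128 and 64 are illustrative assumptions).
img_feat = np.zeros(128)  # features from the 8x8 low-res image (pose, colors)
aud_feat = np.ones(64)    # features from the audio track (gender, age, identity)

def fuse(img_feat, aud_feat):
    """Combine the two embeddings into one code for the generator.
    Plain concatenation is a simplified stand-in for the paper's
    learned fusion module."""
    return np.concatenate([img_feat, aud_feat])

code = fuse(img_feat, aud_feat)
lr, scale = 8, 16
hr = lr * scale  # the decoder upsamples 8x8 to 128x128 (the 16x factor)
```

The fused code is what a decoder would map to the high-resolution output, so attributes missing from the degraded image can be supplied by the audio branch.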
Google is organizing a workshop to bring together key researchers from academia and Google to exchange ideas and forge new collaborations. The key theme of the workshop is computational imaging, which aims to produce visual representations of data and physical processes beyond what current imaging instruments can capture, by simultaneously designing algorithms and hardware.
Prof. Paolo Favaro has been invited to speak at the event and presents his group's latest work on image deblurring, from classic model-based methods to more recent deep learning approaches.
Our research contributed to the discovery of important patterns in the EEG signals of coma patients. Read more about it in the official news article here.
S. Jonas, A. Rossetti, M. Oddo, S. Jenni, P. Favaro and F. Zubler, "EEG-based Outcome Prediction after Cardiac Arrest with Convolutional Neural Networks: Performance and Visualization of Discriminative Features", in Human Brain Mapping, 2019.
PhD student Adam Bielski just got his paper accepted (with a spotlight!) at the upcoming NeurIPS conference. It is his first publication since he started his PhD in our group. Congratulations on your excellent work!
Please find the abstract below, and keep an eye on our publications page, as it will be updated with details about the NeurIPS submission.
We introduce a novel framework to build a model that can learn how to segment objects from a collection of images without any human annotation. Our method builds on the observation that the location of object segments can be perturbed locally relative to a given background without affecting the realism of a scene. Our approach is to first train a generative model of a layered scene. The layered representation consists of a background image, a foreground image and the mask of the foreground. A composite image is then obtained by overlaying the masked foreground image onto the background. The generative model is trained in an adversarial fashion against a discriminator, which forces the generative model to produce realistic composite images. To force the generator to learn a representation where the foreground layer corresponds to an object, we perturb the output of the generative model by introducing a random shift of both the foreground image and mask relative to the background. Because the generator is unaware of the shift before computing its output, it must produce layered representations that are realistic for any such random perturbation. Finally, we learn to segment an image by defining an autoencoder consisting of an encoder, which we train, and the pre-trained generator as the decoder, which we freeze. The encoder maps an image to a feature vector, which is fed as input to the generator to give a composite image matching the original input image. Because the generator outputs an explicit layered representation of the scene, the encoder learns to detect and segment objects. We demonstrate this framework on real images of several object categories.
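The layered composition and the random shift described above can be sketched in a few lines. This is a minimal NumPy version, where the circular shift via np.roll and the function name are illustrative assumptions rather than the paper's exact implementation.

```python
import numpy as np

def composite(bg, fg, mask, shift=(0, 0)):
    """Overlay the masked foreground on the background after shifting
    foreground and mask together relative to the background.
    np.roll (a circular shift) stands in for the paper's perturbation."""
    dy, dx = shift
    fg_s = np.roll(fg, (dy, dx), axis=(0, 1))
    m_s = np.roll(mask, (dy, dx), axis=(0, 1))
    # Alpha-composite: mask selects foreground pixels, its complement background.
    return m_s * fg_s + (1 - m_s) * bg
```

Because the generator does not know the shift before producing its output, composites must look realistic for every shift, which pushes the mask to cover a coherent, movable object rather than an arbitrary region.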