Abdelhak Lemkhenter and Paolo Favaro, in German Conference on Pattern Recognition (GCPR), 2020.
In this work, we introduce Phase Swap, a novel self-supervised learning task for bio-signals. Most hand-crafted features for bio-signals in general, and EEG in particular, are derived from the power spectrum, e.g., by considering the energy of the signal within predefined frequency bands. In fact, the phase information is most often discarded, as it is more sample-specific, and thus more sensitive to noise, than the amplitude. However, various medical studies have shown the link between the phase component and physiological patterns such as cognitive functions in the case of brain activity. Motivated by this line of research, we build a self-supervised task that encourages the trained models to learn the implicit phase-amplitude coupling. This task, named Phase Swap, consists of discriminating between real samples and samples for which the phase component in the Fourier domain was swapped out for one taken from another sample. We show that the learned self-supervised features generalize better across experimental settings and subject identities than a supervised baseline for two classification tasks, seizure detection and sleep scoring, on four different datasets: ExpandedEDF (Sleep Cassette + Sleep Telemetry), CHB-MIT, and the ISRUC-Sleep data set. These findings highlight the benefits of our self-supervised pretraining for various machine learning applications on bio-signals.
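The core augmentation can be sketched in a few lines of NumPy: combine the amplitude spectrum of one sample with the phase spectrum of another. The function name and the 1-D single-channel setting are illustrative assumptions, not the paper's code.

```python
import numpy as np

def phase_swap(x1, x2):
    """Sketch of the Phase Swap augmentation: keep the amplitude
    spectrum of x1, but replace its phase with the phase of x2."""
    f1 = np.fft.rfft(x1)
    f2 = np.fft.rfft(x2)
    swapped = np.abs(f1) * np.exp(1j * np.angle(f2))
    return np.fft.irfft(swapped, n=len(x1))

# Usage: create a "fake" sample from two real ones.
rng = np.random.default_rng(0)
a = rng.standard_normal(256)
b = rng.standard_normal(256)
fake = phase_swap(a, b)
```

A classifier trained to tell `fake` apart from real samples must learn how phase and amplitude are coupled in genuine bio-signals, which is the self-supervised signal the paper exploits.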
Simon Jenni and Paolo Favaro, in Asian Conference on Computer Vision (ACCV), 2020.
Current state-of-the-art methods cast monocular 3D human pose estimation as a learning problem by training neural networks on costly large data sets of images and corresponding skeleton poses. In contrast, we propose an approach that can exploit small annotated data sets by fine-tuning networks pre-trained via self-supervised learning on (large) unlabeled data sets. To drive such models in the pre-training step towards supporting 3D pose estimation, we introduce a novel self-supervised feature learning task designed to focus on the 3D structure in an image. We exploit images extracted from videos captured with a multi-view camera system. The task is to classify whether two images depict two views of the same scene up to a rigid transformation. In a multi-view data set, where objects deform in a non-rigid manner, a rigid transformation occurs only between two views taken at the exact same time, i.e., when they are synchronized. We demonstrate the effectiveness of the synchronization task on the Human3.6M data set and achieve state-of-the-art results in 3D human pose estimation.
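The pair sampling behind this pretext task might look like the following sketch. The array layout, function name, and offset scheme are assumptions for illustration; the actual training pipeline operates on video frames.

```python
import numpy as np

def make_sync_pairs(views, num_pairs, rng=None, max_offset=30):
    """Sample training pairs for the synchronization pretext task.
    `views` has shape (num_views, num_frames, ...), one row per camera.
    Positive pairs (label 1): two views at the same timestamp, hence
    related by a rigid transformation. Negative pairs (label 0): two
    views at slightly different timestamps (desynchronized)."""
    if rng is None:
        rng = np.random.default_rng()
    n_views, n_frames = views.shape[:2]
    pairs, labels = [], []
    for _ in range(num_pairs):
        v1, v2 = rng.choice(n_views, size=2, replace=False)
        t = rng.integers(n_frames)
        if rng.random() < 0.5:
            # synchronized: same timestamp, different cameras
            pairs.append((views[v1, t], views[v2, t]))
            labels.append(1)
        else:
            # desynchronized: shift the second view by a small offset
            t2 = (t + rng.integers(1, max_offset)) % n_frames
            pairs.append((views[v1, t], views[v2, t2]))
            labels.append(0)
    return pairs, labels
```

A binary classifier trained on such pairs has to reason about the 3D structure shared by the two views, which is what makes the learned features useful for pose estimation.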
We hit the bullseye again, twice this time. Two papers by Givi Meishvili and Simon Jenni were accepted to CVPR, one of the largest, top-tier conferences in the field of Computer Vision. Not only that: both papers were selected as orals, meaning they fall into the top 25% of accepted papers, which get the opportunity to be presented in a 5-minute talk at the conference. Congratulations on this achievement! The titles and abstracts are listed below. We will soon update the publication section on our website with the papers and supplementary material.
Steering Self-Supervised Feature Learning Beyond Local Pixel Statistics
Simon Jenni, Hailin Jin and Paolo Favaro
We introduce a novel principle for self-supervised feature learning based on the discrimination of specific transformations of an image. We argue that the generalization capability of learned features depends on what image neighborhood size is sufficient to discriminate different image transformations: the larger the required neighborhood size, the higher the order of the image statistics that the feature can describe. An accurate description of higher-order image statistics allows the features to better represent the shape and configuration of objects and their context, which ultimately generalizes better to new tasks such as object classification and detection. This suggests a criterion to choose and design image transformations. Based on this criterion, we introduce a novel image transformation that we call limited context inpainting (LCI). This transformation inpaints an image patch conditioned only on a small rectangular pixel boundary (the limited context). Because of the limited boundary information, the inpainter can learn to match local pixel statistics, but is unlikely to match the global statistics of the image. We claim that the same principle can be used to justify the performance of transformations such as image rotations and warping. Indeed, we demonstrate experimentally that learning to discriminate transformations such as LCI, image warping, and rotations yields features with state-of-the-art generalization capabilities on several datasets such as Pascal VOC, STL-10, CelebA, and ImageNet. Remarkably, our trained features achieve a higher performance on Places than features trained through supervised learning with ImageNet labels.
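Preparing an LCI input can be sketched as follows. The function and parameter names are illustrative assumptions; here the patch interior is simply filled with noise as a placeholder, whereas in the paper it is filled by a learned inpainter conditioned on the preserved boundary.

```python
import numpy as np

def lci_region(image, top, left, size, border=4):
    """Sketch of Limited Context Inpainting (LCI) input preparation:
    erase the interior of a (size x size) patch while keeping a thin
    `border` of real pixels around it. An inpainter then has to fill
    the interior using only that limited context."""
    out = image.copy()
    # Interior to be inpainted (noise stands in for the inpainter).
    h0, w0 = top + border, left + border
    h1, w1 = top + size - border, left + size - border
    out[h0:h1, w0:w1] = np.random.default_rng(0).random((h1 - h0, w1 - w0))
    return out
```

Because only the thin boundary is visible, the inpainter can match local pixel statistics but not the global image statistics, so a network that detects the resulting mismatch must model higher-order structure.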
Learning to Have an Ear for Face Super-Resolution
Givi Meishvili, Simon Jenni and Paolo Favaro
We propose a novel method to use both audio and a low-resolution image to perform extreme face super-resolution (a 16x increase of the input size). When the resolution of the input image is very low (e.g., 8x8 pixels), the loss of information is so dire that important details of the original identity have been lost and audio can aid the recovery of a plausible high-resolution image. In fact, audio carries information about facial attributes, such as gender and age. Moreover, if an audio track belongs to an identity in a known training set, such audio might even help to restore the original identity. Towards this goal, we propose a model and a training procedure to extract information about the face of a person from her audio track and to combine it with the information extracted from her low-resolution image, which relates more to pose and colors of the face. We demonstrate that the combination of these two inputs yields high-resolution images that better capture the correct attributes of the face. In particular, we show experimentally that audio can assist in recovering attributes such as the gender, the age and the identity, and thus improve the correctness of the image reconstruction process. Our procedure does not make use of human annotation and thus can be easily trained with existing video datasets. Moreover, we show that our model builds a factorized representation of images and audio as it allows one to mix low-resolution images and audio from different videos and to generate realistic faces with semantically meaningful combinations.
Google organizes a workshop to bring together key researchers from academia and Google to exchange ideas and forge new collaborations. The key theme of the workshop is computational imaging, which aims to produce visual representations of data and physical processes beyond what imaging instruments can do today by simultaneously designing algorithms and hardware.
Prof. Paolo Favaro is invited to speak at the event and presents the latest work in his group on image deblurring from the classic model-based methods to the more recent deep learning approaches.
Our research contributed to the discovery of important patterns in the EEG signals of coma patients. Read more about it in the official news article here.
S. Jonas, A. Rossetti, M. Oddo, S. Jenni, P. Favaro and F. Zubler, "EEG-based Outcome Prediction after Cardiac Arrest with Convolutional Neural Networks: Performance and Visualization of Discriminative Features", in Human Brain Mapping, 2019.