News

Master Projects at the ARTORG Center and Prophesee
Nov. 30, 2021

Multiple Master's thesis projects are offered at the ARTORG Center and at Prophesee.


The Gerontechnology and Rehabilitation group at the ARTORG Center for Biomedical Engineering is offering multiple MSc thesis projects to students who are interested in working with real patient data, artificial intelligence and machine learning algorithms. The goal of these projects is to transfer the findings to the clinic in order to solve today’s healthcare problems and thus improve the quality of life of patients.

  • Assessment of Digital Biomarkers at Home by Radar. [PDF]
  • Comparison of Radar, Seismograph and Ballistocardiography to Monitor Sleep at Home. [PDF]
  • Sentiment Analysis in Speech. [PDF]

Contact: Dr. Stephan Gerber (stephan.gerber@artorg.unibe.ch)


A 6-month internship at Prophesee, Grenoble, is offered to a talented Master's student.

The internship focuses on burst imaging, following the work of Sam Hasinoff, and on exploring ways to improve it using event-based vision.

Compensation to cover living expenses in Grenoble is offered. Only students who have the legal right to work in France can apply.

Anyone interested can send an email with their CV to Daniele Perrone (dperrone@prophesee.ai).


More thesis topics from the Computer Vision Group can be found here.

Latest Publications in BMVC, ICCV and ICML
Nov. 30, 2021

We have new research papers published in top ML and CV conferences!

 


Learning to Deblur and Rotate Motion-Blurred Faces

Givi Meishvili, Attila Szabo, Simon Jenni and Paolo Favaro, in British Machine Vision Conference (BMVC), 2021.

We propose a solution to the novel task of rendering sharp videos from new viewpoints from a single motion-blurred image of a face. Our method handles the complexity of face blur by implicitly learning the geometry and motion of faces through the joint training on three large datasets: FFHQ and 300VW, which are publicly available, and a new multi-view face dataset that we built, which will be made available upon publication. The first two datasets provide a large variety of faces and allow our model to generalize better. The third dataset instead allows us to introduce multi-view constraints, which are crucial to synthesizing sharp videos from a new camera view. Our dataset consists of high frame rate synchronized videos from multiple views of several subjects displaying a wide range of facial expressions. We use the high frame rate videos to simulate realistic motion blur through averaging. Thanks to this dataset, we train a neural network to reconstruct a 3D video representation from a single image and the corresponding face gaze. We then provide a camera viewpoint relative to the estimated gaze and the blurry image as input to an encoder-decoder network to generate a video of sharp frames with a novel camera viewpoint. We demonstrate our approach on test subjects of our multi-view dataset and VIDTIMIT.
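
As a rough illustration of the blur synthesis step mentioned above, here is a minimal sketch of averaging consecutive high-frame-rate frames into motion-blurred frames (the function name, window size and array layout are our own assumptions, not the paper's code):

    import numpy as np

    def simulate_motion_blur(frames, window=9):
        """Average consecutive high-frame-rate frames to synthesize motion blur.

        frames: array of shape (T, H, W, C) with values in [0, 1]
        window: number of consecutive frames averaged into one blurry frame
        """
        frames = np.asarray(frames, dtype=np.float32)
        num_blurry = frames.shape[0] // window
        blurry = [frames[i * window:(i + 1) * window].mean(axis=0)
                  for i in range(num_blurry)]
        return np.stack(blurry)  # shape (num_blurry, H, W, C)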

Paper: https://www.bmvc2021-virtualconference.com/assets/papers/1043.pdf

 

ISD: Self-Supervised Learning by Iterative Similarity Distillation

Ajinkya Tejankar*, Soroush Abbasi Koohpayegani*, Vipin Pillai, Paolo Favaro, Hamed Pirsiavash, in International Conference on Computer Vision (ICCV), 2021.

Recently, contrastive learning has achieved great results in self-supervised learning, where the main idea is to pull two augmentations of an image (positive pairs) closer compared to other random images (negative pairs). We argue that not all negative images are equally negative. Hence, we introduce a self-supervised learning algorithm where we use a soft similarity for the negative images rather than a binary distinction between positive and negative pairs. We iteratively distill a slowly evolving teacher model to the student model by capturing the similarity of a query image to some random images and transferring that knowledge to the student. Specifically, our method should handle unbalanced and unlabeled data better than existing contrastive learning methods, because the randomly chosen negative set might include many samples that are semantically similar to the query image. In this case, our method labels them as highly similar while standard contrastive methods label them as negatives. Our method achieves comparable results to the state-of-the-art models. Our code is available here: https://github.com/UMBCvision/ISD.
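
To make the idea of soft similarity distillation more concrete, here is a minimal PyTorch-style sketch (the function names, temperatures and memory-bank handling are illustrative assumptions; the official code is linked above):

    import torch
    import torch.nn.functional as F

    def isd_loss(student_q, teacher_k, memory_bank, t_student=0.1, t_teacher=0.05):
        """Soft similarity distillation in the spirit of ISD (a sketch, not the official code).

        student_q:   (B, D) student embeddings of one augmentation, L2-normalized
        teacher_k:   (B, D) teacher embeddings of another augmentation, L2-normalized
        memory_bank: (N, D) embeddings of random images, L2-normalized
        """
        # Similarity of each key/query to the random images, turned into distributions.
        p_teacher = F.softmax(teacher_k @ memory_bank.t() / t_teacher, dim=1)
        log_p_student = F.log_softmax(student_q @ memory_bank.t() / t_student, dim=1)
        # Transfer the teacher's soft similarities to the student (KL divergence).
        return F.kl_div(log_p_student, p_teacher.detach(), reduction="batchmean")

    @torch.no_grad()
    def ema_update(teacher, student, momentum=0.99):
        """Slowly evolving teacher: exponential moving average of the student weights."""
        for pt, ps in zip(teacher.parameters(), student.parameters()):
            pt.mul_(momentum).add_(ps, alpha=1 - momentum)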

Paper: https://www.cvg.unibe.ch/media/publications/pdf/ISD_iccv21.pdf

 

A Unified Generative Adversarial Network Training via Self-Labeling and Self-Attention

Tomoki Watanabe and Paolo Favaro, in International Conference on Machine Learning (ICML), 2021.

We propose a novel GAN training scheme that can handle any level of labeling in a unified manner. Our scheme introduces a form of artificial labeling that can incorporate manually defined labels, when available, and induce an alignment between them. To define the artificial labels, we exploit the assumption that neural network generators can be trained more easily to map nearby latent vectors to data with semantic similarities, than across separate categories. We use generated data samples and their corresponding artificial conditioning labels to train a classifier. The classifier is then used to self-label real data. To boost the accuracy of the self-labeling, we also use the exponential moving average of the classifier. However, because the classifier might still make mistakes, especially at the beginning of the training, we also refine the labels through self-attention, by using the labeling of real data samples only when the classifier outputs a high classification probability score. We evaluate our approach on CIFAR-10, STL-10 and SVHN, and show that both self-labeling and self-attention consistently improve the quality of generated data. More surprisingly, we find that the proposed scheme can even outperform class-conditional GANs.
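
A minimal sketch of the self-labeling step with confidence filtering might look as follows (names and the confidence threshold are our own assumptions, not the paper's implementation):

    import torch
    import torch.nn.functional as F

    def self_label_real_batch(ema_classifier, real_images, threshold=0.9):
        """Self-labeling step (a sketch of the idea, not the paper's code).

        The EMA classifier, trained on generated samples and their artificial labels,
        assigns labels to real images; only confident predictions are kept, mirroring
        the refinement described in the abstract.
        """
        with torch.no_grad():
            probs = F.softmax(ema_classifier(real_images), dim=1)
            conf, labels = probs.max(dim=1)
        keep = conf > threshold  # discard low-confidence labelings
        return real_images[keep], labels[keep]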

Paper: https://arxiv.org/pdf/2106.09914.pdf

Bachelor or Master Projects at the ARTORG Center
Jan. 27, 2021

The Gerontechnology and Rehabilitation group at the ARTORG Center for Biomedical Engineering is offering multiple BSc and MSc thesis projects to students who are interested in working with real patient data, artificial intelligence and machine learning algorithms. The goal of these projects is to transfer the findings to the clinic in order to solve today’s healthcare problems and thus improve the quality of life of patients.

  • Machine Learning Based Gait-Parameter Extraction by Using Simple Rangefinder Technology. [PDF]
  • Speech recognition in speech and language therapy [PDF]
  • Home-Monitoring of Elderly by Radar [PDF]
  • Gait feature detection in Parkinson's Disease [PDF]
  • Development of an arthroscopic training device using virtual reality [PDF]

Contact: Dr. Stephan Gerber (stefan.gerber@artorg.unibe.ch), Michael Single (michael.single@artorg.unibe.ch)

 

More thesis topics from the Computer Vision Group can be found here.

Latest Publications in GCPR and ACCV
Sept. 21, 2020

The pandemic can't stop us! We have two new research papers published in GCPR and ACCV. Both were selected for oral presentation.

 

Boosting Generalization in Bio-Signal Classification by Learning the Phase-Amplitude Coupling

Abdelhak Lemkhenter and Paolo Favaro, in German Conference on Pattern Recognition (GCPR), 2020.

In this work, we introduce Phase Swap, a novel self-supervised learning task for bio-signals. Most hand-crafted features for bio-signals in general, and EEG in particular, are derived from the power spectrum, e.g. considering the energy of the signal within predefined frequency bands. In fact, most often the phase information is discarded, as it is more sample specific, and thus more sensitive to noise, than the amplitude. However, various medical studies have shown the link between the phase component and various physiological patterns, such as cognitive functions in the case of brain activity. Motivated by this line of research, we build a self-supervised task that encourages the trained models to learn the implicit phase-amplitude coupling. This task, named Phase Swap, consists of discriminating between real samples and samples for which the phase component in the Fourier domain was swapped out by one taken from another sample. We show that the learned self-supervised features generalize better across experimental settings and subject identities compared to a supervised baseline for two classification tasks, seizure detection and sleep scoring, on four different datasets: ExpandedEDF (Sleep Cassette + Sleep Telemetry), CHB-MIT and the ISRUC-Sleep dataset. These findings highlight the benefits of our self-supervised pretraining for various machine learning applications for bio-signals.
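
To make the pretext task more concrete, a minimal sketch of the phase-swap operation in NumPy could look like this (the function name and the 1D-signal assumption are ours, not the paper's code):

    import numpy as np

    def phase_swap(x, y):
        """Create a 'phase-swapped' sample (a minimal sketch of the pretext task).

        Keeps the amplitude spectrum of signal x but replaces its phase with the
        phase of another signal y; the network is then trained to tell such
        samples apart from unmodified ones.
        """
        X, Y = np.fft.rfft(x), np.fft.rfft(y)
        swapped = np.abs(X) * np.exp(1j * np.angle(Y))
        return np.fft.irfft(swapped, n=len(x))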

Pre-print: https://arxiv.org/pdf/2009.07664

 
Self-Supervised Multi-View Synchronization Learning for 3D Pose Estimation

Simon Jenni and Paolo Favaro, in Asian Conference on Computer Vision (ACCV), 2020.

Current state-of-the-art methods cast monocular 3D human pose estimation as a learning problem by training neural networks on costly large data sets of images and corresponding skeleton poses. In contrast, we propose an approach that can exploit small annotated data sets by fine-tuning networks pre-trained via self-supervised learning on (large) unlabeled data sets. To drive such models in the pre-training step towards supporting 3D pose estimation, we introduce a novel self-supervised feature learning task designed to focus on the 3D structure in an image. We exploit images extracted from videos captured with a multi-view camera system. The task is to classify whether two images depict two views of the same scene up to a rigid transformation. In a multi-view data set, where objects deform in a non-rigid manner, a rigid transformation occurs only between two views taken at the exact same time, i.e., when they are synchronized. We demonstrate the effectiveness of the synchronization task on the Human3.6M data set and achieve state-of-the-art results in 3D human pose estimation.
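
A minimal sketch of how such synchronized/unsynchronized training pairs might be sampled (frame indexing and function name are illustrative assumptions, not the paper's code):

    import random

    def sample_pair(video_a, video_b, synchronized: bool):
        """Build one training pair for the synchronization pretext task (a sketch).

        video_a, video_b: lists of frames from two cameras filming the same scene,
        aligned so that index t corresponds to the same time in both views.
        Returns two frames and a binary label: 1 if they were taken at the same
        time (related by a rigid transformation), 0 otherwise.
        """
        t = random.randrange(len(video_a))
        if synchronized:
            return video_a[t], video_b[t], 1
        # pick a different, unsynchronized time index for the second view
        other = random.choice([s for s in range(len(video_b)) if s != t])
        return video_a[t], video_b[other], 0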

CVPR 2020: Our Papers are Orals!
March 16, 2020

We hit the bullseye again, twice this time. The two papers by Givi Meishvili and Simon Jenni got accepted to one of the largest, top-tier conferences in the field of Computer Vision: CVPR. Not only that... both papers were also classified as orals, meaning they fall into the top 25% of papers that get the opportunity to be presented in a 5-minute talk at the conference. Congratulations on this achievement. The titles and abstracts are listed below. We will soon update the publication section on our website with the papers and supplementary material.

 

Steering Self-Supervised Feature Learning Beyond Local Pixel Statistics

Simon Jenni, Hailin Jin and Paolo Favaro


We introduce a novel principle for self-supervised feature learning based on the discrimination of specific transformations of an image. We argue that the generalization capability of learned features depends on what image neighborhood size is sufficient to discriminate different image transformations: the larger the required neighborhood size, the higher the order of the image statistics that the feature can describe. An accurate description of higher-order image statistics allows us to better represent the shape and configuration of objects and their context, which ultimately generalizes better to new tasks such as object classification and detection. This suggests a criterion to choose and design image transformations. Based on this criterion, we introduce a novel image transformation that we call limited context inpainting (LCI). This transformation inpaints an image patch conditioned only on a small rectangular pixel boundary (the limited context). Because of the limited boundary information, the inpainter can learn to match local pixel statistics, but is unlikely to match the global statistics of the image. We claim that the same principle can be used to justify the performance of transformations such as image rotations and warping. Indeed, we demonstrate experimentally that learning to discriminate transformations such as LCI, image warping and rotations yields features with state-of-the-art generalization capabilities on several datasets such as Pascal VOC, STL-10, CelebA, and ImageNet. Remarkably, our trained features achieve a higher performance on Places than features trained through supervised learning with ImageNet labels.
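
As an illustration of the LCI input construction, here is a minimal sketch that erases a random patch while keeping a thin pixel boundary (patch size, border width and function name are our own assumptions, not the paper's code):

    import numpy as np

    def limited_context_input(image, patch=64, border=4):
        """Build the input for limited context inpainting (a sketch of the idea).

        A random patch is erased except for a thin pixel boundary; an inpainter is
        then asked to fill the interior from this limited context, which encourages
        it to match local, but not global, image statistics.
        """
        h, w = image.shape[:2]
        y = np.random.randint(0, h - patch)
        x = np.random.randint(0, w - patch)
        out = image.copy()
        interior = out[y + border:y + patch - border, x + border:x + patch - border]
        interior[...] = 0  # erase everything inside the boundary
        return out, (y, x)  # patch location, needed to paste the inpainted result back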


Learning to Have an Ear for Face Super-Resolution

Givi Meishvili, Simon Jenni and Paolo Favaro


We propose a novel method to use both audio and a low-resolution image to perform extreme face super-resolution (a 16x increase of the input size). When the resolution of the input image is very low (e.g., 8x8 pixels), the loss of information is so dire that important details of the original identity have been lost and audio can aid the recovery of a plausible high-resolution image. In fact, audio carries information about facial attributes, such as gender and age. Moreover, if an audio track belongs to an identity in a known training set, such audio might even help to restore the original identity. Towards this goal, we propose a model and a training procedure to extract information about the face of a person from her audio track and to combine it with the information extracted from her low-resolution image, which relates more to pose and colors of the face. We demonstrate that the combination of these two inputs yields high-resolution images that better capture the correct attributes of the face. In particular, we show experimentally that audio can assist in recovering attributes such as the gender, the age and the identity, and thus improve the correctness of the image reconstruction process. Our procedure does not make use of human annotation and thus can be easily trained with existing video datasets. Moreover, we show that our model builds a factorized representation of images and audio as it allows one to mix low-resolution images and audio from different videos and to generate realistic faces with semantically meaningful combinations.
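
As a toy illustration of the audio-visual fusion idea (not the architecture from the paper; all layer sizes, shapes and names are assumptions), a sketch in PyTorch might look like this:

    import torch
    import torch.nn as nn

    class AudioVisualSR(nn.Module):
        """Toy fusion model: encode a low-resolution face and an audio feature
        vector separately, fuse the embeddings, and decode a higher-resolution image."""

        def __init__(self, emb=128):
            super().__init__()
            self.image_enc = nn.Sequential(nn.Flatten(), nn.Linear(8 * 8 * 3, emb), nn.ReLU())
            self.audio_enc = nn.Sequential(nn.Linear(1024, emb), nn.ReLU())
            self.decoder = nn.Sequential(nn.Linear(2 * emb, 128 * 128 * 3), nn.Sigmoid())

        def forward(self, lr_image, audio_features):
            # lr_image: (B, 3, 8, 8), audio_features: (B, 1024)
            z = torch.cat([self.image_enc(lr_image), self.audio_enc(audio_features)], dim=1)
            return self.decoder(z).view(-1, 3, 128, 128)  # 16x upscaled output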