Latest publications in ICCV 2023
Aug. 21, 2023

Two papers from our group got accepted to ICCV 2023!

Efficient Video Prediction via Sparsely Conditioned Flow Matching

Aram Davtyan*, Sepehr Sameni* and Paolo Favaro, in IEEE International Conference on Computer Vision (ICCV 2023)

We introduce a novel generative model for video prediction based on latent flow matching, an efficient alternative to diffusion-based models. In contrast to prior work, we keep the high cost of modeling the past during training and inference at bay by conditioning only on a small random set of past frames at each integration step of the image generation process. Moreover, to enable the generation of high-resolution videos and to speed up training, we work in the latent space of a pretrained VQGAN. Finally, we propose to approximate the initial condition of the flow ODE with the previous noisy frame, which allows us to reduce the number of integration steps and hence to speed up sampling at inference time. We call our model Random frame conditioned flow Integration for VidEo pRediction, or, in short, RIVER. We show that RIVER achieves performance superior or on par with prior work on common video prediction benchmarks, while requiring an order of magnitude fewer computational resources.
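To make the sparse conditioning concrete, here is a minimal, hypothetical sketch of one flow matching training step in the spirit of RIVER. All names (river_training_step, the model signature, the linear noise-to-data path) are our own illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def river_training_step(model, latents):
    """One hypothetical flow matching step on VQGAN-encoded videos.

    latents: (B, T, C, H, W) latent frames; the last frame is the
    prediction target, the earlier ones are the available past.
    model: assumed to predict the flow's vector field from a noisy
    latent, a time step, a conditioning frame, and its temporal offset.
    """
    B, T = latents.shape[:2]
    device = latents.device
    x1 = latents[:, -1]                        # data endpoint of the flow
    x0 = torch.randn_like(x1)                  # noise endpoint

    # Sparse conditioning: pick ONE random past frame per sample instead
    # of processing the full history, which keeps costs low.
    idx = torch.randint(0, T - 1, (B,), device=device)
    cond = latents[torch.arange(B, device=device), idx]
    offset = (T - 1 - idx).float()             # how far back the frame is

    # Linear interpolation between noise and data (a common flow matching
    # path choice); the regression target is the velocity x1 - x0.
    t = torch.rand(B, 1, 1, 1, device=device)
    xt = (1 - t) * x0 + t * x1
    v_pred = model(xt, t.flatten(), cond, offset)
    return F.mse_loss(v_pred, x1 - x0)
```

Since only one conditioning frame is processed per integration step, the per-step cost stays constant in the length of the history, which is where the order-of-magnitude efficiency gain described above comes from.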

Project website: https://araachie.github.io/river

Paper: https://arxiv.org/abs/2211.14575

Spatio-Temporal Crop Aggregation for Video Representation Learning

Sepehr Sameni, Simon Jenni and Paolo Favaro, in IEEE International Conference on Computer Vision (ICCV 2023)

We propose Spatio-temporal Crop Aggregation for video representation LEarning (SCALE), a novel method that enjoys high scalability at both training and inference time. Our model builds long-range video features by learning from sets of video clip-level features extracted with a pretrained backbone. To train the model, we propose a self-supervised objective consisting of masked clip feature prediction. We apply sparsity both to the input, by extracting a random set of video clips, and to the loss function, by reconstructing only the sparse inputs. Moreover, we reduce dimensionality by working in the latent space of a pretrained backbone applied to single video clips. These techniques make our method not only extremely efficient to train, but also highly effective in transfer learning. We demonstrate that our video representation yields state-of-the-art performance with linear, nonlinear, and k-NN probing on common action classification and video understanding datasets.
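As a rough illustration of the masked clip feature objective, the sketch below assumes clip features are precomputed with a frozen pretrained backbone and that a small aggregator network (e.g., a transformer encoder) reconstructs the masked ones; aggregator, mask_token, and scale_training_step are hypothetical names, not the authors' API.

```python
import torch
import torch.nn.functional as F

def scale_training_step(aggregator, mask_token, clip_feats, mask_ratio=0.5):
    """One hypothetical masked clip feature prediction step.

    clip_feats: (B, N, D) features of N randomly sampled clips per video,
    precomputed with a frozen pretrained backbone (so this step never
    touches raw pixels).
    mask_token: (D,) learnable placeholder for masked positions.
    """
    B, N, D = clip_feats.shape

    # Randomly mask a subset of clip positions.
    mask = torch.rand(B, N, device=clip_feats.device) < mask_ratio
    inputs = torch.where(mask.unsqueeze(-1),
                         mask_token.expand(B, N, D),
                         clip_feats)

    # The aggregator reconstructs masked clip features from the visible
    # ones (positional information about each clip's location in the
    # video would be added in the full method).
    preds = aggregator(inputs)

    # Sparse loss: only the masked positions contribute.
    return F.mse_loss(preds[mask], clip_feats[mask])
```

Because the backbone is frozen and both the inputs and the loss are sparse, only the lightweight aggregator is trained, which is where the method's training efficiency comes from.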

Paper: https://arxiv.org/abs/2211.17042