Seminars and Talks

Are Large-scale Datasets Necessary for Self-Supervised Pre-training?
by Alaa El-Nouby
Date: Friday, Mar. 11
Time: 15:30
Location: Online Call via Zoom

Our guest speaker is Alaa El-Nouby from Meta AI Research and Inria Paris. You are all cordially invited to the CVG Seminar on March 11th at 3:30 p.m. on Zoom (passcode: 913674).

Abstract

Pre-training models on large-scale datasets such as ImageNet is a standard practice in computer vision. This paradigm is especially effective for tasks with small training sets, for which high-capacity models tend to overfit. In this work, we consider a self-supervised pre-training scenario that leverages only the target-task data. We consider datasets such as Stanford Cars, Sketch, and COCO, which are order(s) of magnitude smaller than ImageNet. Our study shows that denoising autoencoders, such as BEiT or a variant that we introduce in this paper, are more robust to the type and size of the pre-training data than popular self-supervised methods trained by comparing image embeddings. We obtain competitive performance compared to ImageNet pre-training on a variety of classification datasets from different domains. On COCO, when pre-training solely on COCO images, the detection and instance segmentation performance surpasses supervised ImageNet pre-training in a comparable setting.
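The denoising-autoencoder pre-training the abstract refers to (BEiT-style masked image modeling) can be sketched at a high level: mask a large fraction of image patches and train the model to reconstruct them from the visible ones. Below is a toy NumPy illustration of that objective; the patch count, embedding size, mask ratio, and the linear stand-in for the network are all hypothetical, not the talk's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image": 64 patches, each a 4-dim embedding (hypothetical sizes).
patches = rng.normal(size=(64, 4))

# Randomly mask a large fraction of patches, as in masked image modeling.
mask_ratio = 0.75
n_masked = int(len(patches) * mask_ratio)
masked_idx = rng.choice(len(patches), size=n_masked, replace=False)

corrupted = patches.copy()
corrupted[masked_idx] = 0.0  # replace masked patches with a zero token

# A real model would encode `corrupted` and predict the missing content;
# here an identity "decoder" W stands in for the network (assumption).
W = np.eye(4)
reconstruction = corrupted @ W

# Denoising objective: reconstruction loss computed on masked patches only.
loss = np.mean((reconstruction[masked_idx] - patches[masked_idx]) ** 2)
```

The key property the abstract highlights is that this objective depends only on the images themselves, so it can be applied to a small target-task dataset without any external pre-training corpus.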

Bio

Alaa El-Nouby is a PhD student at Meta AI Research and Inria Paris, advised by Hervé Jégou and Ivan Laptev. His research interests are metric learning, self-supervised learning, and transformers for computer vision. Prior to pursuing his PhD, Alaa received his MSc from the University of Guelph and the Vector Institute, advised by Graham Taylor, where he conducted research on spatio-temporal representation learning and text-to-image synthesis with generative models.