| Date: | Thursday, Sep. 12 |
| --- | --- |
| Time: | 09:00 |
| Location: | CVG headquarters |
Luca Rolshoven will present his BSc thesis on Thursday.
This presentation will focus on my bachelor thesis on facial expression recognition. While researchers achieve good results on images taken in laboratories under identical or at least similar conditions, performance on unconstrained images with varying head poses and illumination is still quite poor. My presentation will focus on the latter setting and cover the available datasets, the main challenges, and the work that has been done so far. Moreover, I will present my model STN-COV, a slightly modified version of a popular neural network architecture. With this model, I was able to achieve results comparable to the current state of the art.
| Date: | Thursday, Sep. 5 |
| --- | --- |
| Time: | 10:00 |
| Location: | CVG, 2nd floor, Neubrückstrasse 10 |
Collaborator Nicolas Deperrois from the Computational Neuroscience Group will give a short talk about his research tomorrow, Sep. 5, at 10:00 at CVG headquarters.
| Date: | Wednesday, Aug. 28 |
| --- | --- |
| Time: | 10:00 |
| Location: | Room 302, Neubrückstrasse 10 |
The First-Person (Egocentric) Vision (FPV) paradigm makes it possible to acquire images of the world from the user’s perspective. Compared to standard Third-Person Vision, FPV is advantageous for building intelligent wearable systems that can assist users and augment their abilities. Given their intrinsic mobility and their ability to acquire user-related information, FPV systems need to deal with a continuously evolving environment. This poses many challenges, such as localization and context and intent understanding, which need to be addressed in order to deliver effective solutions.
In this talk, I will present recent research on First-Person Vision carried out in the Image Processing Laboratory (IPLAB) at the University of Catania. I will first focus on applications that rely on localization and context understanding to infer the user’s behavior. Then, I will present EPIC-KITCHENS, a large-scale dataset for understanding object interactions from First-Person Vision data, as well as recent work on the task of egocentric action anticipation.