
Aleksandra Chuchvara: Efficient algorithms towards interactive light-field applications

Tampere University
Location: Korkeakoulunkatu 5, Tampere
Hervanta campus, Rakennustalo, auditorium RG202
Date: 22.11.2024 12.00–16.00 (UTC+2)
Language: English
Entrance fee: Free of charge
In her doctoral dissertation, MSc Aleksandra Chuchvara presents novel methods for depth estimation in the context of light fields, focusing on the resource efficiency and high performance of the proposed methods and thus further advancing the technology towards widespread adoption in next-generation 3D imaging systems.

Imagine looking at a photograph: not a flat one, but an immersive one that allows you to look around from different angles and at different depths. This is what light-field technology does. It employs arrays of cameras and sensors to gather detailed information about the light rays in a space: their color, brightness, and direction. Displaying a light field creates a truly immersive and realistic visual experience. However, the sheer amount of light-field data raises new technical challenges that can only be solved effectively by combining advanced hardware architectures with sophisticated software algorithms.

In her dissertation, MSc Aleksandra Chuchvara addressed the problem of efficient depth estimation from sparse light fields captured with wide-baseline camera arrays. Depth estimation from multiple views is a long-standing challenge and a crucial step in many light-field systems, as it forms the foundation for essential use cases such as 3D scene reconstruction and rendering. Depth estimation methods that offer a better balance between computational time and reconstruction accuracy are therefore especially in demand.

The dissertation comprises a collection of three IEEE papers: two published in IEEE journals and one in IEEE conference proceedings. The main discussion develops the essential ideas around the use of a sparse, superpixel-based image representation in combination with classical stereo matching. Estimating depth over a sparse superpixel-based representation significantly reduces the problem size while at the same time facilitating improved reconstruction accuracy. The proposed method thus strikes a favourable time-accuracy trade-off that differs from previous work in light-field depth estimation.
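
To make the core idea concrete, the following Python sketch illustrates how matching superpixels instead of individual pixels shrinks a classical stereo-matching problem: one disparity is estimated per superpixel and then assigned to the whole segment. It is an illustration only, not the algorithm from the dissertation; the function name, the SAD matching cost, and all parameter values are assumptions.

    # Illustrative sketch only: superpixel-level stereo matching between two
    # rectified views; not the dissertation's actual method.
    import numpy as np
    from skimage.segmentation import slic
    from skimage.util import img_as_float

    def superpixel_disparity(left, right, n_segments=600, max_disp=64, patch=7):
        """Estimate one disparity per superpixel (hypothetical helper)."""
        left, right = img_as_float(left), img_as_float(right)
        labels = slic(left, n_segments=n_segments, compactness=10, start_label=0)
        disparity = np.zeros(left.shape[:2], dtype=np.float32)
        half = patch // 2
        for lab in np.unique(labels):
            ys, xs = np.nonzero(labels == lab)
            cy, cx = int(ys.mean()), int(xs.mean())               # superpixel centroid
            y0, y1 = max(cy - half, 0), min(cy + half + 1, left.shape[0])
            x0, x1 = max(cx - half, 0), min(cx + half + 1, left.shape[1])
            ref = left[y0:y1, x0:x1]
            best_d, best_cost = 0, np.inf
            for d in range(max_disp):                             # classical patch-based SAD cost
                if x0 - d < 0:
                    break
                cost = np.abs(ref - right[y0:y1, x0 - d:x1 - d]).mean()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disparity[labels == lab] = best_d                     # one value for the whole segment
        return disparity

With a few hundred superpixels in place of hundreds of thousands of pixels, the number of matching decisions drops by orders of magnitude, which is the essence of the time-accuracy trade-off discussed above.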

As a second highlight, Chuchvara’s dissertation presents a novel method that adaptively transforms an image before superpixel segmentation in order to improve segmentation quality independently of the underlying segmentation method. It is intended to serve as a generic content-adaptation framework for existing superpixel segmentation algorithms and, in turn, improves the accuracy of the above-mentioned depth reconstruction method.
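
The sketch below illustrates only the general wrap-around idea: the image is warped so that detail-rich rows and columns receive more samples, an unmodified off-the-shelf superpixel method is run on the warped image, and the resulting labels are mapped back to the original grid. The gradient-driven resampling, function names, and parameters are assumptions for illustration, not the transform developed in the dissertation.

    # Illustrative sketch only: a generic "transform, segment, map back" wrapper;
    # the gradient-driven resampling stands in for a content-adaptive transform.
    import numpy as np
    from skimage.color import rgb2gray
    from skimage.filters import sobel
    from skimage.segmentation import slic

    def _adaptive_axis_map(importance, out_len):
        # Cumulative importance gives a monotone map that spends more output
        # samples where the image content is detailed.
        cdf = np.cumsum(importance + 1e-3)
        cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])
        u = np.linspace(0.0, 1.0, out_len)
        return np.interp(u, cdf, np.arange(len(importance)))

    def content_adaptive_segmentation(image, segment_fn):
        gray = rgb2gray(image)
        grad = sobel(gray)
        h, w = gray.shape
        rows = np.clip(np.round(_adaptive_axis_map(grad.mean(axis=1), h)).astype(int), 0, h - 1)
        cols = np.clip(np.round(_adaptive_axis_map(grad.mean(axis=0), w)).astype(int), 0, w - 1)
        warped = image[rows][:, cols]                     # nonuniformly resampled image
        warped_labels = segment_fn(warped)                # any existing superpixel algorithm
        inv_rows = np.searchsorted(rows, np.arange(h)).clip(0, h - 1)
        inv_cols = np.searchsorted(cols, np.arange(w)).clip(0, w - 1)
        return warped_labels[np.ix_(inv_rows, inv_cols)]  # labels mapped back to the original grid

    # Example: wrap an unmodified SLIC segmentation.
    # labels = content_adaptive_segmentation(img, lambda x: slic(x, n_segments=400, start_label=0))

Because the wrapper never touches the segmentation routine itself, the same adaptation step can in principle be placed in front of any existing superpixel algorithm.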

“Given that camera arrays are one of the most efficient ways of capturing the diversity of real-world scenes as light fields with high resolution in the spatial and temporal dimensions, solving the posed problem is meaningful and helpful for numerous practical applications of light-field technology,” Chuchvara says. 

“I believe the solutions proposed in my dissertation can contribute to the successful adoption of light-field systems in future live 3D video streaming, enabling high-end use cases such as 3D video conferencing and cinematic 3D streaming and visualisation,” she continues.

Public defence on 22 November 2024

The doctoral dissertation of MSc Aleksandra Chuchvara titled Methods for Fast and Accurate Depth Estimation from Sparse Light Fields will be publicly examined in the Faculty of Information Technology and Communication Sciences at Tampere University at 12 o’clock on Friday 22 November 2024 in auditorium RG202 of the Rakennustalo building (Korkeakoulunkatu 5, Tampere). The Opponents will be Professor Jörn Ostermann from Leibniz Universität Hannover, Germany, and Professor A. Aydın Alatan from Middle East Technical University, Turkey. The Custos will be Professor Atanas Gotchev from Tampere University.


The dissertation is available online.