You can find a short introduction about me here.

I am currently working as a Ph.D. student at the French Institute for Research in Computer Science and Automation (Inria) in Rennes, France. The title of my research topic is “Sensing and Reconstruction of Plenoptic Point Clouds”. In other words, my focus is on finding methods to better capture, represent, and compress a special kind of point cloud: a plenoptic one.

Training

One important aspect of the PLENOPTIMA project is the exchange of knowledge between the organizations that are part of the network. Therefore, although my host institution is Inria in Rennes, France – where I currently reside – secondments to both academic and industrial partners are planned. For the academic one, I will go to Tampere University in Tampere, Finland, at the start of 2023. For the industrial one, a secondment to the Sandvik Group, also located in Finland, is expected.

Summary of Research Topic

The recent growth of interest in three-dimensional (3D) scene representation has been directly linked with the development of technologies that simplify the capture process for immersive content, such as augmented/virtual reality and 3D telepresence. In this context, a point cloud (PC) representation of the scene has been favored over other explicit scene approaches because of its ease of representation. In particular, plenoptic point clouds (PPC) provide a more realistic representation by attempting to approximate the plenoptic function. This is achieved by making their color attributes view-dependent: each point stores multiple RGB triplets, one for each view at the time of capture.
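To make the idea concrete, here is a minimal sketch of what a plenoptic point might look like as a data structure. The class name, fields, and values are hypothetical illustrations, not an actual format used in the project: the key difference from a conventional point is that color is a per-view mapping rather than a single triplet.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a conventional point cloud stores one (x, y, z)
# position and a single RGB triplet per point; a plenoptic point instead
# keeps one RGB triplet per capturing view, making color view-dependent.

@dataclass
class PlenopticPoint:
    position: tuple                                   # (x, y, z) coordinates
    view_colors: dict = field(default_factory=dict)   # view id -> (r, g, b)

    def color_for_view(self, view_id, fallback=(0, 0, 0)):
        """Return the color seen from a given view, or a fallback if unseen."""
        return self.view_colors.get(view_id, fallback)

# Example: one surface point observed by two cameras under different shading.
p = PlenopticPoint(position=(0.1, 0.2, 0.3),
                   view_colors={0: (200, 180, 170), 1: (150, 140, 135)})
```

Storing one triplet per view is also what makes PPCs costly, which is why compressing these redundant, correlated attributes is a central problem.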

However, the current state of the art has limitations, which I discuss in the video below:

Current Progress

At this stage, I am working on a compression-based comparison between explicit scene representations – such as PPC – and implicit ones – NeRF, for instance. To do so, we use the same capture and rendering pipeline as NeRF, but with an explicit 3D representation instead.
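As a rough intuition for the explicit side of such a comparison, the toy sketch below splats colored 3D points into a pinhole camera image with a simple z-buffer. This is not the actual pipeline used in the project; the intrinsics, resolution, and point data are made up for illustration.

```python
import numpy as np

# Toy sketch (illustrative only): render an explicit point set into a camera
# view by projecting each point through pinhole intrinsics K and keeping the
# closest point per pixel, the explicit counterpart of volume rendering.

def project_points(points, colors, K, width, height):
    """Splat Nx3 points (camera coordinates) into a width x height image."""
    image = np.zeros((height, width, 3), dtype=np.uint8)
    depth = np.full((height, width), np.inf)
    for (x, y, z), c in zip(points, colors):
        if z <= 0:                          # point behind the camera
            continue
        u = int(K[0, 0] * x / z + K[0, 2])  # perspective projection
        v = int(K[1, 1] * y / z + K[1, 2])
        if 0 <= u < width and 0 <= v < height and z < depth[v, u]:
            depth[v, u] = z                 # z-buffer: keep the closest point
            image[v, u] = c
    return image

# Two points along the same ray: only the nearer (red) one is visible.
K = np.array([[100.0, 0.0, 32.0],
              [0.0, 100.0, 32.0],
              [0.0,   0.0,  1.0]])
pts = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 2.0]])
cols = np.array([[255, 0, 0], [0, 255, 0]])
img = project_points(pts, cols, K, 64, 64)
```

Rendering explicit points this way makes the trade-off visible: the geometry and colors are stored directly (and must be compressed), whereas a NeRF encodes the same scene implicitly in network weights.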

With that in mind, one topic I will look into is leveraging the advantages of neural networks for representing 3D scenes in order to build a “bridge” between the two approaches. This would help address some of the current limitations of representation with PPCs.