Authors: G. A. Cheimariotis, A. Karakottas, E. Chatzis, A. Kanlis, D. Zarpalas

Year: 2025

Venue: 21st International Conference on Computer Analysis of Images and Patterns (CAIP)
Data valuation and monetization are becoming increasingly important across domains such as eXtended Reality (XR) and digital media. In the context of 3D scene reconstruction from a set of images, whether casually or professionally captured, not all inputs contribute equally to the final output. Neural Radiance Fields (NeRFs) enable photorealistic 3D reconstruction of scenes by optimizing a volumetric radiance field given a set of images. However, in-the-wild scenes often include image captures of varying quality, occlusions, and transient objects, resulting in uneven utility across inputs. In this paper, we propose a method to quantify the individual contribution of each image to NeRF-based reconstructions of in-the-wild image sets. Contribution is assessed through reconstruction quality metrics based on PSNR and MSE. We validate our approach by removing low-contributing images during training and measuring the resulting impact on reconstruction fidelity.
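The contribution measure described in the abstract can be sketched as a leave-one-out comparison of reconstruction quality: score each image by how much average PSNR drops when it is excluded from training. This is a minimal illustrative sketch, assuming such a leave-one-out protocol; the helper names (`psnr`, `contribution_scores`) and the example scores are assumptions, not the paper's actual pipeline.

```python
import numpy as np

def psnr(rendered: np.ndarray, reference: np.ndarray, max_val: float = 1.0) -> float:
    """PSNR between a rendered view and its reference image (pixel values in [0, max_val])."""
    mse = float(np.mean((rendered - reference) ** 2))
    if mse == 0.0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

def contribution_scores(psnr_full: float, psnr_without: dict) -> dict:
    """Hypothetical leave-one-out score: image i's contribution is the drop in
    average reconstruction PSNR when the model is trained without image i."""
    return {i: psnr_full - p for i, p in psnr_without.items()}

# Illustrative (made-up) per-image leave-one-out PSNR values.
scores = contribution_scores(28.0, {0: 27.9, 1: 25.0, 2: 27.5})
# Rank images from lowest to highest contribution; low contributors are
# candidates for removal from the training set.
ranked = sorted(scores, key=scores.get)
```

Under this sketch, image 1 would be the strongest contributor (its removal costs 3 dB), while image 0 is a candidate for pruning.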