Figure 1: The pipeline of our geometry-aware NeRF editing framework. Leveraging the shape priors provided by synthetic data, our method enables geometric analysis of real reconstructed models and interactive geometric editing, using box abstractions as interaction handles. At the same time, the part segmentation allows our method to recombine parts from different models to form new ones.
Abstract
Neural Radiance Fields (NeRFs) have shown great potential for tasks like novel view synthesis of static 3D scenes. Since NeRFs are trained on a large number of input images, it is not trivial to change their content afterwards. Previous methods for modifying NeRFs provide some control, but they do not support the direct shape deformation that is common for geometry representations such as triangle meshes. In this paper, we present a NeRF geometry editing method that first extracts a triangle mesh representation of the geometry inside a NeRF. This mesh can be modified with any 3D modeling tool (we use ARAP mesh deformation). The mesh deformation is then extended into a volume deformation around the shape, which establishes a mapping between ray queries to the deformed NeRF and the corresponding queries to the original NeRF. The basic shape editing mechanism is extended towards more powerful and more meaningful editing handles by generating box abstractions of the NeRF shapes, which provide an intuitive interface to the user. By additionally assigning semantic labels, we can even identify and combine parts from different objects. We demonstrate the performance and quality of our method in a number of experiments on synthetic data as well as real captured scenes.
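To illustrate the core idea of rendering an edited shape without retraining the NeRF, the following Python sketch shows one simple way to warp ray samples from the deformed space back into the original NeRF's space. It is not the paper's implementation: the names `warp_to_original`, `render_sample`, and `nerf_query` are placeholders, and the inverse-distance interpolation of per-vertex displacements stands in for a proper volumetric extension of the surface deformation.

```python
# Minimal sketch (not the authors' implementation) of the ray-bending idea:
# map sample points from the deformed space back to the original NeRF space
# by interpolating the per-vertex displacement of the edited proxy mesh.
# Assumes: original_verts / deformed_verts are (V, 3) arrays of corresponding
# mesh vertices; nerf_query is any callable returning (rgb, sigma) for points
# in the original (undeformed) NeRF.

import numpy as np
from scipy.spatial import cKDTree


def warp_to_original(points, deformed_verts, original_verts, k=8):
    """Map query points from the deformed space to the original NeRF space.

    Each point is offset by an inverse-distance-weighted average of the
    displacements (original - deformed) of its k nearest deformed vertices.
    This is a simple stand-in for a true volumetric extension of the surface
    deformation (e.g. a tetrahedral cage or harmonic interpolation).
    """
    tree = cKDTree(deformed_verts)
    dists, idx = tree.query(points, k=k)                 # (N, k) each
    weights = 1.0 / np.maximum(dists, 1e-8)              # inverse-distance weights
    weights /= weights.sum(axis=1, keepdims=True)
    displacement = original_verts[idx] - deformed_verts[idx]   # (N, k, 3)
    return points + (weights[..., None] * displacement).sum(axis=1)


def render_sample(points, nerf_query, deformed_verts, original_verts):
    """Evaluate the *original* NeRF at samples taken along rays cast into the
    deformed scene, so the edit appears in the rendering without retraining."""
    warped = warp_to_original(points, deformed_verts, original_verts)
    return nerf_query(warped)
```

In this sketch, editing only changes the mesh vertices; the trained NeRF weights are left untouched and are simply queried at the warped sample positions.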
Paper
Coming Soon
Code
Coming Soon
Video
BibTex
Last updated in September 2023.