NeUDF: Leaning Neural Unsigned Distance Fields with Volume Rendering

 

Yu-Tao Liu1,2          Li Wang1,2          Jie Yang1,2          Weikai Chen3          Xiaoxu Meng3          Bo Yang3          Lin Gao1,2*               

 

1 Institute of Computing Technology, Chinese Academy of Sciences

 

2 University of Chinese Academy of Sciences          3 Tencent America

 

* Corresponding author  

Accepted to IEEE/CVF CVPR 2023  

 

 

 

 

Figure 1: We show comparisons of the input multi-view images (top), the watertight surfaces (middle) reconstructed with the state-of-the-art SDF-based volume rendering method NeuS, and the open surfaces (bottom) reconstructed with our method. Our method is capable of reconstructing high-fidelity shapes with both open and closed surfaces from multi-view images.

 

 

Abstract

 

Multi-view shape reconstruction has achieved impressive progress thanks to the latest advances in neural implicit rendering. However, existing methods based on the signed distance function (SDF) are limited to closed surfaces and fail to reconstruct a wide range of real-world objects that contain open-surface structures. In this work, we introduce a new neural rendering framework, named NeUDF, that can reconstruct surfaces with arbitrary topologies solely from multi-view supervision. To gain the flexibility of representing arbitrary surfaces, NeUDF leverages the unsigned distance function (UDF) as the surface representation. While a naive extension of an SDF-based neural renderer cannot scale to UDF, we propose two new formulations of the weight function specially tailored for UDF-based volume rendering. Furthermore, to cope with open-surface rendering, where the in/out test is no longer valid, we present a dedicated normal regularization strategy to resolve the surface orientation ambiguity. We extensively evaluate our method on a number of challenging datasets, including the two typical open-surface datasets MGN and Deep Fashion 3D. Experimental results demonstrate that NeUDF significantly outperforms state-of-the-art methods in multi-view surface reconstruction, especially for complex shapes with open boundaries.

 

 

 

Paper

 

https://arxiv.org/abs/2304.10080

 

Code

 

https://github.com/IGLICT/NeUDF

 

 

 

Methodology

 

Our method leverages the UDF as the surface representation and optimizes the network with UDF-based volume rendering. In particular, we propose a new rendering weight function and a point sampling strategy specially tailored for UDF to ensure both accurate rendering and sufficient regularization. We further propose a normal regularization to address the unstable gradient near the zero level set of the UDF. This dedicated volume rendering scheme achieves high-fidelity open surface reconstruction.
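For intuition only, the sketch below shows one way UDF values sampled along a ray can be turned into rendering weights by alpha compositing. The simple exponential density mapping, the hyper-parameter beta, and the helper name render_weights_from_udf are assumptions for illustration; they are not the unbiased, occlusion-aware weight function proposed in the paper.

import torch

def render_weights_from_udf(udf_vals, deltas, beta=0.05):
    """udf_vals: (N,) unsigned distances at the samples along one ray.
    deltas:   (N,) spacing between consecutive samples.
    beta:     sharpness hyper-parameter (assumed here, not from the paper)."""
    # Density peaks where the unsigned distance approaches zero.
    density = torch.exp(-udf_vals / beta) / beta
    # Standard volume-rendering alpha compositing.
    alpha = 1.0 - torch.exp(-density * deltas)
    transmittance = torch.cumprod(
        torch.cat([torch.ones(1), 1.0 - alpha + 1e-7])[:-1], dim=0)
    return alpha * transmittance  # one rendering weight per sample

A pixel color is then the weighted sum of the per-sample colors predicted along the ray, e.g. (weights[:, None] * sample_colors).sum(dim=0).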

 

 

 

 

Figure 2: We design a UDF-based volume rendering scheme specially tailored for open surface reconstruction. The rendering weight is accurate and sharp on the side from which the ray enters the surface, while the sampling weight is smooth and balanced on both sides of the surface. The proposed rendering weight is specially tailored for UDF rendering and is unbiased and occlusion-aware. To pair with this rendering weight, any sampling weight that distributes weight evenly on both sides of the surface is compatible with our framework.
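The sketch below illustrates how such a balanced sampling weight could drive importance re-sampling along a ray via inverse-transform sampling. The symmetric weight exp(-UDF/b) and the helper name importance_sample are illustrative assumptions, not the paper's exact sampling weight.

import torch

def importance_sample(t_vals, udf_vals, n_new, b=0.1):
    """t_vals: (N,) sorted sample depths along a ray; udf_vals: (N,) UDF values there."""
    # Symmetric in the unsigned distance, so both sides of the surface get comparable weight.
    w = torch.exp(-udf_vals / b) + 1e-5
    pdf = w[:-1] / w[:-1].sum()                              # one bin per interval
    cdf = torch.cat([torch.zeros(1), torch.cumsum(pdf, dim=0)])
    u = torch.rand(n_new)                                    # uniform draws in [0, 1)
    idx = torch.searchsorted(cdf, u, right=True).clamp(1, len(t_vals) - 1)
    lo, hi = t_vals[idx - 1], t_vals[idx]
    frac = (u - cdf[idx - 1]) / (cdf[idx] - cdf[idx - 1] + 1e-8)
    return lo + frac * (hi - lo)                             # (n_new,) new sample depths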

 

 

 

 

Figure 3: To address the unstable gradient of the UDF near its zero level set, we propose a UDF-based normal regularization that leverages vicinity information to enhance gradient stability. We use the gradients at points (in blue) offset from the surface to approximate the unstable surface normal (in green) of the UDF representation. The approximated gradient is more stable and reliable for robust optimization.
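A minimal PyTorch-autograd sketch of this idea: query the UDF gradient a small step away from the zero level set and use it as the normal proxy. The function names, the offset scheme, and the step size eps are assumptions for illustration, not the paper's exact regularization term.

import torch
import torch.nn.functional as F

def udf_gradient(udf_net, x, create_graph=False):
    """Gradient of the predicted unsigned distance with respect to the query points x."""
    x = x.detach().requires_grad_(True)
    d = udf_net(x)
    (g,) = torch.autograd.grad(d.sum(), x, create_graph=create_graph)
    return g

def stabilized_normal(udf_net, x, eps=1e-2):
    # Direction of steepest ascent at the (possibly unstable) near-surface points.
    direction = F.normalize(udf_gradient(udf_net, x), dim=-1)
    # Re-evaluate the gradient a small step away from the zero level set,
    # where it is better behaved, and use it as the surface-normal proxy.
    g_off = udf_gradient(udf_net, x + eps * direction, create_graph=True)
    return F.normalize(g_off, dim=-1)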

 

 

Results

 

 

Challenging cases

 

 

Garment cases

 

 

Real captured cases

 

 

DTU and BMVS datasets

 

Figure 4: We show reconstruction results on challenging cases, garment cases, real captured cases, and samples from the DTU and BMVS datasets. Our method reconstructs high-fidelity meshes with both closed and open surfaces. For objects with complex topology and detailed geometry, our method also achieves high reconstruction quality.

 

 

Video

 

 

 

BibTex

 

@inproceedings{Liu23NeUDF,
    author    = {Liu, Yu-Tao and Wang, Li and Yang, Jie and Chen, Weikai and Meng, Xiaoxu and Yang, Bo and Gao, Lin},
    title     = {NeUDF: Leaning Neural Unsigned Distance Fields with Volume Rendering},
    booktitle = {Computer Vision and Pattern Recognition (CVPR)},
    year      = {2023},
}