SPARSE2DGS: SPARSE-VIEW SURFACE RECONSTRUCTION USING 2D GAUSSIAN SPLATTING WITH DENSE POINT CLOUD

Natsuki Takama1, Shintaro Ito1, Koichi Ito1, Hwann-Tzong Chen2, Takafumi Aoki1
1Graduate School of Information Sciences, Tohoku University
2Department of Computer Science, National Tsing Hua University, Taiwan
ICIP 2025

Abstract

Gaussian Splatting (GS) has gained attention as a fast and effective method for novel view synthesis. It has also been applied to 3D reconstruction from multi-view images and can achieve fast and accurate 3D reconstruction. However, GS assumes that a large number of multi-view images are available as input, and the reconstruction accuracy degrades significantly when only a limited number of input images are available. One of the main reasons is the insufficient number of 3D points in the sparse point cloud obtained through Structure from Motion (SfM), which results in a poor initialization of the Gaussian primitives. We propose a new 3D reconstruction method, called Sparse2DGS, that enhances 2DGS to reconstruct objects from only three images. Sparse2DGS employs DUSt3R, a foundation model for stereo images, along with COLMAP MVS to generate a highly accurate and dense 3D point cloud, which is then used to initialize the 2D Gaussians. Through experiments on the DTU dataset, we show that Sparse2DGS can accurately reconstruct the 3D shapes of objects using just three images.

Method

We propose a 3D reconstruction method that works from only three input images by combining 2DGS with DUSt3R and COLMAP MVS. Figure 1 shows an overview of the proposed method. The proposed method takes three images and their camera parameters as input. First, 3D point clouds are reconstructed from the inputs using COLMAP and DUSt3R, respectively. Next, the two point clouds are integrated after removing outliers to obtain a dense and accurate 3D point cloud. Then, the radiance field is optimized using this point cloud to initialize the 2D Gaussians. Finally, a depth map is rendered for each viewpoint from the optimized radiance field, and the object surface is reconstructed by fusing the depth maps. A sketch of the point-cloud integration step is given below.
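As an illustration of the point-cloud integration step, here is a minimal Python sketch using Open3D; the statistical-outlier parameters and file names are assumptions for illustration, not the paper's exact procedure.

        import open3d as o3d

        def fuse_point_clouds(colmap_ply, dust3r_ply):
            # Merge the COLMAP MVS and DUSt3R point clouds after outlier removal.
            merged = o3d.geometry.PointCloud()
            for path in (colmap_ply, dust3r_ply):
                pcd = o3d.io.read_point_cloud(path)
                # Drop points whose mean distance to their 20 nearest neighbors
                # deviates by more than 2 standard deviations (assumed parameters).
                pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
                merged += pcd
            return merged

        # e.g., merged = fuse_point_clouds("colmap_dense.ply", "dust3r_points.ply");
        # the merged cloud then provides the initial positions of the 2D Gaussians.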

Figure 1: Overview of Sparse2DGS.
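The final stage of the pipeline, fusing the rendered depth maps into a surface, can likewise be sketched with Open3D's TSDF integration; the voxel size, truncation distance, and input handling below are assumptions for illustration rather than the paper's exact settings.

        import open3d as o3d

        def fuse_depth_maps(rgbd_frames, intrinsic, extrinsics):
            # rgbd_frames: RGBDImages rendered from the optimized 2D Gaussians.
            # intrinsic:   o3d.camera.PinholeCameraIntrinsic shared by all views.
            # extrinsics:  4x4 world-to-camera matrices, one per view.
            volume = o3d.pipelines.integration.ScalableTSDFVolume(
                voxel_length=0.004,  # assumed voxel size in scene units
                sdf_trunc=0.02,      # assumed truncation distance
                color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8,
            )
            for rgbd, extrinsic in zip(rgbd_frames, extrinsics):
                volume.integrate(rgbd, intrinsic, extrinsic)
            return volume.extract_triangle_mesh()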

Quantitative Results

In this experiment, we use the DTU dataset, a multi-view image dataset for 3D reconstruction. The mesh model reconstructed by each method is evaluated by the distance between the mesh and the ground-truth 3D point cloud, with Chamfer Distance (CD) as the evaluation metric.
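For reference, a generic two-sided Chamfer Distance between a reconstructed mesh and the ground-truth point cloud can be computed as below; note that the official DTU evaluation additionally applies visibility masks and distance thresholds, so this sketch is a simplification.

        import numpy as np
        import open3d as o3d
        from scipy.spatial import cKDTree

        def chamfer_distance(mesh_path, gt_pcd_path, n_samples=100_000):
            # Two-sided Chamfer Distance: mean of accuracy and completeness.
            mesh = o3d.io.read_triangle_mesh(mesh_path)
            pred = np.asarray(mesh.sample_points_uniformly(n_samples).points)
            gt = np.asarray(o3d.io.read_point_cloud(gt_pcd_path).points)
            acc = cKDTree(gt).query(pred)[0].mean()    # prediction -> ground truth
            comp = cKDTree(pred).query(gt)[0].mean()   # ground truth -> prediction
            return 0.5 * (acc + comp)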


Table 1: Quantitative results of Chamfer Distance (CD) ↓ on DTU dataset. Red, orange, and yellow cells indicate the best, second best, and third best performance, respectively.


Qualitative Results

Figure 2: Examples of the mesh models reconstructed by each method.

BibTeX


        @inproceedings{takama2025sparse2dgs,
          title     = "Sparse2DGS: Sparse-View Surface Reconstruction using 2D Gaussian Splatting with Dense Point Cloud",
          author    = "Takama, Natsuki and Ito, Shintaro and Ito, Koichi and Chen, Hwann-Tzong and Aoki, Takafumi",
          booktitle = "IEEE International Conference on Image Processing (ICIP)",
          year      = "2025",
        }