Radiance fields have emerged as powerful tools for 3D scene reconstruction. However, casual capture remains challenging due to the narrow field of view of perspective cameras, which limits viewpoint coverage and feature correspondences necessary for reliable camera calibration and reconstruction. While commercially available 360° cameras offer significantly broader coverage than perspective cameras for the same capture effort, existing 360° reconstruction methods require special capture protocols and pre-processing steps that undermine the promise of radiance fields: effortless workflows to capture and reconstruct 3D scenes. We propose a practical pipeline for reconstructing 3D scenes directly from raw 360° camera captures. Our pipeline requires no special capture protocols or pre-processing, and exhibits robustness to a prevalent source of reconstruction errors: the human operator that is visible in all 360° imagery. To facilitate evaluation, we introduce a multi-tiered dataset of scenes captured as raw dual-fisheye images, establishing a benchmark for robust casual 360° reconstruction. Our method significantly outperforms not only vanilla 3DGS for 360° cameras but also robust perspective baselines when perspective cameras are simulated from the same capture, demonstrating the advantages of 360° capture for casual reconstruction.
360° cameras are a convenient way of performing casual scene capture. Using a dual-fisheye 360° camera model allows much higher reconstruction quality than using perspective cameras or a fisheye camera with a perspective model.
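As a rough illustration of the difference (the exact calibration model used here may differ), a pinhole perspective camera only images directions within a limited field of view, whereas a commonly used equidistant fisheye model maps the angle to the optical axis almost linearly to image radius and therefore covers a full hemisphere per lens:

\[
\underbrace{u = f\,\frac{x}{z},\qquad v = f\,\frac{y}{z}}_{\text{pinhole (perspective)}}
\qquad\text{vs.}\qquad
\underbrace{\theta = \arccos\!\left(\frac{z}{\lVert\mathbf{x}\rVert}\right),\qquad r = f\,\theta}_{\text{equidistant fisheye}}
\]

for a camera-frame point \(\mathbf{x} = (x, y, z)\). A dual-fisheye camera combines two such lenses facing opposite directions, so the raw capture covers the full sphere without an intermediate undistortion step.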
Conventional undistortion-based use of 360° captures degrades performance, both with robust reconstruction (SLS) and without (3DGS).
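To make this baseline concrete, the sketch below shows the conventional "undistort first" step: the 360° capture (stitched into an equirectangular panorama) is resampled into pinhole crops that are then fed to a standard perspective reconstruction pipeline. The function name and image conventions are illustrative, not the implementation used in the paper.

```python
# Minimal sketch of undistortion-based utilization of a 360° capture:
# resample an equirectangular panorama into a perspective (pinhole) crop.
import numpy as np
import cv2


def perspective_from_equirect(pano, fov_deg=90.0, yaw_deg=0.0, pitch_deg=0.0, size=512):
    """Sample a pinhole view with the given FoV and orientation from an equirectangular panorama."""
    h_pano, w_pano = pano.shape[:2]
    f = 0.5 * size / np.tan(np.radians(fov_deg) / 2.0)  # pinhole focal length in pixels

    # Pixel grid of the target perspective image, expressed as unit camera-frame rays.
    u, v = np.meshgrid(np.arange(size), np.arange(size))
    x = (u - size / 2.0) / f
    y = (v - size / 2.0) / f
    rays = np.stack([x, y, np.ones_like(x)], axis=-1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)

    # Rotate rays by the desired yaw (around y) and pitch (around x).
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    R_yaw = np.array([[np.cos(yaw), 0, np.sin(yaw)], [0, 1, 0], [-np.sin(yaw), 0, np.cos(yaw)]])
    R_pitch = np.array([[1, 0, 0], [0, np.cos(pitch), -np.sin(pitch)], [0, np.sin(pitch), np.cos(pitch)]])
    rays = rays @ (R_yaw @ R_pitch).T

    # Convert rays to longitude/latitude and then to panorama pixel coordinates.
    lon = np.arctan2(rays[..., 0], rays[..., 2])        # [-pi, pi]
    lat = np.arcsin(np.clip(rays[..., 1], -1.0, 1.0))   # [-pi/2, pi/2]
    map_x = ((lon / np.pi + 1.0) * 0.5 * w_pano).astype(np.float32)
    map_y = ((lat / (np.pi / 2) + 1.0) * 0.5 * h_pano).astype(np.float32)
    return cv2.remap(pano, map_x, map_y, interpolation=cv2.INTER_LINEAR, borderMode=cv2.BORDER_REPLICATE)
```

Each such crop sees only a narrow slice of the panorama, which is one reason this pre-processing step reduces viewpoint coverage relative to reconstructing from the raw dual-fisheye images directly.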
We also remove the camera operator, which yields cleaner reconstructions. This is especially important for casual capture with a 360° camera, as the operator is always in view.
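One simple way to realize such operator removal, shown as a hedged sketch below rather than the paper's exact mechanism, is to exclude operator pixels from the photometric loss during optimization, assuming a PyTorch-style training loop and an operator mask from an off-the-shelf person-segmentation network.

```python
# Illustrative sketch: mask out the camera operator so those pixels do not
# contribute to the reconstruction loss. `operator_mask` is assumed to be 1
# where the operator is visible and 0 elsewhere.
import torch


def masked_photometric_loss(rendered, target, operator_mask):
    """L1 photometric loss restricted to pixels not covered by the camera operator."""
    valid = (1.0 - operator_mask.float()).unsqueeze(-1)  # (H, W, 1); 1 = keep pixel
    diff = (rendered - target).abs() * valid             # zero out operator pixels
    n = valid.sum() * rendered.shape[-1]                 # number of valid scalar entries
    return diff.sum() / n.clamp(min=1.0)                 # average over valid pixels only
```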