Lagrangian Hashing for Compressed Neural Field Representations
Shrisudhan Govindarajan*1, Zeno Sambugaro*2, Ahan Shabanov1,
Towaki Takikawa3, Daniel Rebain4, Weiwei Sun4, Nicola Conci2,
Kwang Moo Yi4, Andrea Tagliasacchi1,3,5
1Simon Fraser University, 2University of Trento, 3University of Toronto,
4University of British Columbia, 5Google DeepMind







TL;DR: A memory-efficient hybrid representation that is simultaneously Eulerian (grids) and Lagrangian (points), enabling compact novel view synthesis.




Abstract

We present Lagrangian Hashing, a representation for neural fields combining the characteristics of fast-training NeRF methods that rely on Eulerian grids (i.e., InstantNGP) with those that employ points equipped with features to represent information (e.g., 3D Gaussian Splatting or PointNeRF). We achieve this by incorporating a point-based representation into the high-resolution layers of the hierarchical hash tables of an InstantNGP representation. As our points are equipped with a field of influence, our representation can be interpreted as a mixture of Gaussians stored within the hash table. We propose a loss that encourages the movement of our Gaussians towards regions that require more representation budget to be sufficiently well represented. Our main finding is that our representation allows the reconstruction of signals with a more compact representation, without compromising quality.
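The guidance loss above can be pictured as an error-weighted attraction of Gaussian centres toward under-fit regions. The following is a minimal sketch of that idea, assuming an error-weighted distance to the nearest centre; the exact weighting and formulation here are illustrative assumptions, not the paper's precise loss.

```python
import numpy as np

def guidance_loss(mu, samples, errors):
    """Hedged sketch: pull Gaussian centres toward high-error samples.

    mu:      (K, D) Gaussian centres
    samples: (N, D) sample positions
    errors:  (N,)   per-sample reconstruction error
    """
    # squared distance from every sample to its nearest Gaussian centre
    d2 = np.sum((samples[:, None, :] - mu[None, :, :]) ** 2, axis=-1)  # (N, K)
    nearest = d2.min(axis=1)                                           # (N,)
    # high-error samples far from any Gaussian dominate, so minimising
    # this w.r.t. mu moves centres toward regions that need more budget
    return float(np.mean(errors * nearest))

mu = np.array([[0.0, 0.0]])
samples = np.array([[0.0, 0.0], [1.0, 1.0]])
errors = np.array([0.0, 1.0])
print(guidance_loss(mu, samples, errors))  # 1.0: all loss comes from the far, high-error sample
```

Minimising such a loss with gradient descent on `mu` relocates representation capacity to poorly reconstructed regions.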



Method

Method Overview

Overview: (1) Hashing of voxel vertices: for any given input coordinate \(x_i\), our method identifies the surrounding voxels across \(L\) levels of detail (LoDs); only one LoD is shown for clarity. Indices are then assigned to the vertices of these voxels through a hashing procedure. (2) Lookup to buckets: for all resulting corner indices, we look up the corresponding \(B\) buckets, each containing \(K\) feature vectors and their corresponding positions \(\mu_{k}\). (3) Gaussian interpolation: we compute a Gaussian weight with respect to the input position for every feature vector in the bucket. (4) Feature aggregation: we multiply each feature vector by its Gaussian weight and aggregate the results from every level of detail. (5) Neural network: the resulting concatenated features are mapped to the output domain by the neural network.
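Steps (1)-(4) can be sketched for a single LoD in 2D as follows. The bucket count, \(K\), feature dimension, Gaussian bandwidth, and the InstantNGP-style hash primes below are illustrative assumptions, not the paper's actual hyper-parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
RES, B, K, F, SIGMA = 8, 16, 2, 4, 0.25  # toy sizes (assumptions)

features = rng.normal(size=(B, K, F))   # per-bucket feature vectors
mu = rng.uniform(size=(B, K, 2))        # per-bucket Gaussian centres
PRIMES = np.array([1, 2654435761], dtype=np.uint64)

def hash_vertex(v):
    """(1) Map an integer grid vertex to a bucket index by spatial hashing."""
    v = v.astype(np.uint64)
    return int((v[0] * PRIMES[0]) ^ (v[1] * PRIMES[1])) % B

def lookup(x):
    """(2)-(4): bucket lookup, Gaussian weighting, feature aggregation."""
    cell = np.floor(x * RES).astype(int)
    frac = x * RES - cell
    out = np.zeros(F)
    for dx in (0, 1):
        for dy in (0, 1):
            corner = cell + np.array([dx, dy])
            # standard bilinear weight of this voxel corner
            w_lin = (dx * frac[0] + (1 - dx) * (1 - frac[0])) * \
                    (dy * frac[1] + (1 - dy) * (1 - frac[1]))
            b = hash_vertex(corner)
            # (3) Gaussian weights of the bucket's K points w.r.t. x
            d2 = np.sum((mu[b] - x) ** 2, axis=-1)
            w_gauss = np.exp(-d2 / (2 * SIGMA ** 2))
            w_gauss = w_gauss / (w_gauss.sum() + 1e-8)
            # (4) aggregate the bucket's features under both weights
            out += w_lin * (w_gauss[:, None] * features[b]).sum(axis=0)
    return out

feat = lookup(np.array([0.3, 0.7]))
print(feat.shape)  # (4,) — one LoD's feature, concatenated across LoDs for step (5)
```

In the full method this lookup is repeated per LoD and the concatenated features are fed to the MLP of step (5).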



Pareto plot: PSNR vs # params


Pareto plot: Tanks and Temples (left), gigapixel images (right). We demonstrate that our method consistently outperforms InstantNGP in terms of quality vs. number of parameters.



Comparisons (Ours vs. InstantNGP)

We compare LagHash (ours) with the InstantNGP baseline. Our method achieves competitive performance with the baseline while utilizing fewer parameters.


Results on the NeRF Synthetic dataset. Left to right: Lagrangian representation (final LoD), LagHash (Ours, 6.68M params), InstantNGP (12.10M params).



Lagrangian representation

We present interactive visualizations of the point clouds used in the paper. Use the mouse to interact with the point cloud: scroll to zoom in/out, Left Mouse Button to rotate the camera, Shift+Left Mouse Button to move the camera.


Tanks & Temples Lagrangian representation.
Caterpillar Truck
Multi-scale Lagrangian representation.
Lagrangian rep (pre-final LoD) Lagrangian rep (final LoD) LagHash (Ours)



BibTeX

@inproceedings{govindarajan2024laghashes,
  author    = {Govindarajan, Shrisudhan and Sambugaro, Zeno and Shabanov, Ahan and Takikawa, Towaki and
               Sun, Weiwei and Rebain, Daniel and Conci, Nicola and Yi, Kwang Moo and
               Tagliasacchi, Andrea},
  title     = {Lagrangian Hashing for Compressed Neural Field Representations},
  booktitle = {ECCV},
  year      = {2024},
}
                


Acknowledgements

This work was supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant, NSERC Collaborative Research and Development Grant, Google DeepMind, Digital Research Alliance of Canada, the Advanced Research Computing at the University of British Columbia, Microsoft Azure, and the SFU Visual Computing Research Chair program.



This template was borrowed from Colorful Image Colorization and Canonical Capsules.