Neural graphics primitives (NGP) are promising in enabling the graceful integration of old and new assets across various applications. They represent images, shapes, volumetric and spatial-directional data, aiding in novel view synthesis (NeRFs), generative modeling, light caching, and various other applications. Particularly successful are the primitives that represent data through a feature grid of trained latent embeddings, which are subsequently decoded by a multi-layer perceptron (MLP).
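As a minimal illustration of this grid-plus-MLP pattern, the NumPy sketch below bilinearly interpolates latent features from a small 2D feature grid and decodes them with a tiny MLP. All sizes, variable names, and the random weights are illustrative assumptions, not the architecture from the paper; in a real primitive, the grid entries and MLP weights would be trained jointly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a coarse 2D feature grid and a tiny MLP decoder.
GRID_RES, FEAT_DIM, HIDDEN, OUT_DIM = 16, 4, 32, 3

grid = rng.normal(size=(GRID_RES, GRID_RES, FEAT_DIM))    # latent embeddings
W1 = rng.normal(size=(FEAT_DIM, HIDDEN)); b1 = np.zeros(HIDDEN)
W2 = rng.normal(size=(HIDDEN, OUT_DIM)); b2 = np.zeros(OUT_DIM)

def query(xy):
    """Bilinearly interpolate grid features at xy in [0,1]^2, then decode with the MLP."""
    p = np.asarray(xy) * (GRID_RES - 1)
    i0 = np.floor(p).astype(int)
    i1 = np.minimum(i0 + 1, GRID_RES - 1)
    f = p - i0                                            # interpolation weights
    feat = (grid[i0[0], i0[1]] * (1 - f[0]) * (1 - f[1])
          + grid[i1[0], i0[1]] * f[0] * (1 - f[1])
          + grid[i0[0], i1[1]] * (1 - f[0]) * f[1]
          + grid[i1[0], i1[1]] * f[0] * f[1])
    h = np.maximum(feat @ W1 + b1, 0.0)                   # ReLU hidden layer
    return h @ W2 + b2                                    # e.g. an RGB prediction

print(query([0.3, 0.7]).shape)  # (3,)
```

Because every step is differentiable, gradients from a reconstruction loss can flow back into both the MLP weights and the grid entries, which is what makes such primitives trainable end to end.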
Researchers at NVIDIA and the University of Toronto propose Compact NGP, a machine-learning framework that combines the speed of hash tables with the compactness of index learning, employing the latter for collision resolution via learned probing. This combination is achieved by unifying all feature grids into a common framework in which they act as indexing functions that map into a table of feature vectors.
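To make the "feature grids as indexing functions" view concrete, the sketch below shows two toy indexing functions, a dense row-major index and a fixed-size hash index, both mapping integer grid cells into one shared table of feature vectors. The function names and hash constants are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

FEAT_DIM = 4

def dense_index(cell, res=8):
    # A dense grid is just row-major linear indexing: collision-free,
    # but the table must grow with resolution.
    return cell[0] * res + cell[1]

def hash_index(cell, table_size=64):
    # A hash grid maps arbitrarily many cells into a fixed-size table,
    # accepting collisions in exchange for a bounded memory footprint.
    return (cell[0] * 2654435761 ^ cell[1] * 805459861) % table_size

# Both functions index into the same kind of object: a table of feature vectors.
table = rng.normal(size=(64, FEAT_DIM))
for f in (dense_index, hash_index):
    print(f.__name__, table[f((3, 5))].shape)
```

Under this unified view, a learned index is simply a third kind of indexing function, one whose output is optimized during training to steer colliding cells toward different table slots.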
Compact NGP has been specifically crafted with content distribution in mind, aiming to amortize compression overhead. Its design ensures that decoding on user devices remains low-cost, low-power, and multi-scale, enabling graceful degradation in bandwidth-constrained environments.
These data structures can be combined in novel ways through simple arithmetic combinations of their indices, leading to state-of-the-art compression-versus-quality trade-offs. Mathematically, these combinations amount to assigning the different data structures to subsets of the bits of the indexing function, which significantly reduces the cost of learned indexing, a cost that would otherwise scale exponentially with the number of bits.
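The bit-subset idea can be sketched as follows: the high bits of each table index come from a spatial hash, while only a few low bits come from a small learned probe stored per coarse cell. The bit widths, names, and hash constants below are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical split of a 10-bit index: 7 hash bits (high) + 3 learned
# probe bits (low).
HASH_BITS, PROBE_BITS = 7, 3
TABLE_SIZE = 1 << (HASH_BITS + PROBE_BITS)      # 1024 feature slots
feature_table = rng.normal(size=(TABLE_SIZE, 4))

# One small learned integer per coarse cell (trained in the real method;
# random here). Its storage cost is only PROBE_BITS per entry, and training
# searches just 2**PROBE_BITS candidates per cell -- the exponential blow-up
# that full learned indexing over all 10 bits would incur is avoided.
learned_probe = rng.integers(0, 1 << PROBE_BITS, size=256)

def spatial_hash(cell):
    # Toy stand-in for a prime-multiply XOR hash over grid coordinates.
    return (cell[0] * 2654435761 ^ cell[1] * 805459861) & ((1 << HASH_BITS) - 1)

def combined_index(cell, cell_id):
    # Arithmetic combination of the two index sources: bitwise concatenation.
    return (spatial_hash(cell) << PROBE_BITS) | int(learned_probe[cell_id])

idx = combined_index((3, 7), cell_id=42)
assert 0 <= idx < TABLE_SIZE
```

Keeping the learned portion to a handful of bits is what makes the scheme cheap: the probe codebook stays tiny while still letting training redirect colliding cells to nearby table slots.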
Their approach inherits the speed benefits of hash tables while achieving significantly improved compression, approaching levels comparable to JPEG for image representation. It remains differentiable and does not rely on a dedicated decompression scheme such as an entropy code. Compact NGP supports a range of user-controllable compression rates and offers streaming capabilities, allowing partial results to be loaded, which is especially valuable in bandwidth-limited environments.
They evaluated NeRF compression on both real-world and synthetic scenes, comparing Compact NGP with several contemporary NeRF compression techniques based on TensoRF. Specifically, they employed masked wavelets as a strong and recent baseline for the real-world scenes. Across both types of scenes, Compact NGP outperforms Instant NGP in the trade-off between quality and size.
Compact NGP’s design is tailored to real-world applications where random-access decompression, level-of-detail streaming, and high performance play pivotal roles in both the training and inference stages. Consequently, the authors are eager to explore its potential in domains such as streaming applications, video-game texture compression, live training, and various other areas.
Check out the Paper. All credit for this research goes to the researchers of this project.
Arshad is an intern at MarktechPost. He is currently pursuing his Int. MSc in Physics at the Indian Institute of Technology Kharagpur. Understanding things at a fundamental level leads to new discoveries, which in turn lead to advancements in technology. He is passionate about understanding nature fundamentally with the help of tools like mathematical models, ML models, and AI.