
Google AI Introduces SANPO: A Multi-Attribute Video Dataset for Outdoor Human Egocentric Scene Understanding


For tasks like self-driving, an AI model must understand not only the 3D structure of roads and sidewalks but also detect and recognize street signs and stop lights. This task is made easier by a special laser scanner mounted on the vehicle that captures 3D data. Such a process is called egocentric scene understanding, i.e., comprehending the environment from one's own perspective. The problem is that there are no publicly available datasets beyond the autonomous driving domain that generalize to egocentric human scene understanding.

Researchers at Google have introduced the SANPO (Scene understanding, Accessibility, Navigation, Pathfinding, Obstacle avoidance) dataset, a multi-attribute video dataset for human egocentric scene understanding. SANPO consists of both real-world and synthetic data, called SANPO-Real and SANPO-Synthetic, respectively. SANPO-Real covers diverse environments and includes videos from two stereo cameras to support multi-view methods. The real dataset comprises 11.4 hours of video captured at 15 frames per second (FPS) with dense annotations.

SANPO is a large-scale video dataset for human egocentric scene understanding, consisting of more than 600K real-world and more than 100K synthetic frames with dense prediction annotations.
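
As a quick back-of-the-envelope check using only the figures quoted above, 11.4 hours of video at 15 FPS works out to roughly 615K frames, which lines up with the "more than 600K" real-world frame count:

```python
# Sanity check of the real-data volume using the numbers quoted in the article.
hours = 11.4                      # SANPO-Real recording time
fps = 15                          # capture rate in frames per second
frames = hours * 3600 * fps       # 3600 seconds per hour * frames per second
print(f"~{frames:,.0f} frames")   # ~615,600, consistent with "more than 600K real-world frames"
```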

Google’s researchers have prioritized privacy protection. They collected data in compliance with local, city, and state laws and made sure to remove any personal information, such as faces and vehicle license plates, before sending the data for annotation.

To overcome the imperfections that arise while capturing videos, such as motion blur and human rating mistakes, SANPO-Synthetic was introduced to complement the real dataset. The researchers partnered with an external provider to create a high-quality synthetic dataset optimized to match real-world conditions. SANPO-Synthetic consists of 1,961 sessions recorded using virtualized Zed cameras, split evenly between head-mounted and chest-mounted positions.

The synthetic dataset and a portion of the real dataset have been annotated with panoptic instance masks, which assign a category and an instance ID to every pixel. In SANPO-Real, only a few frames have more than 20 instances per frame. In contrast, SANPO-Synthetic features many more instances per frame than the real dataset.
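
To make the annotation format concrete, here is a minimal sketch of how a panoptic mask can encode both a category and an instance ID for every pixel. The `class_id * LABEL_DIVISOR + instance_id` encoding shown is a common convention in panoptic segmentation pipelines, assumed here purely for illustration; SANPO's actual on-disk format may differ.

```python
import numpy as np

LABEL_DIVISOR = 1000  # hypothetical divisor separating the class ID from the instance ID

# A toy 2x3 panoptic mask: every pixel stores both a category and an instance ID,
# e.g. two "person" instances (class 2) and one "sidewalk" region (class 7).
panoptic = np.array([
    [2 * LABEL_DIVISOR + 1, 2 * LABEL_DIVISOR + 1, 7 * LABEL_DIVISOR + 0],
    [2 * LABEL_DIVISOR + 2, 7 * LABEL_DIVISOR + 0, 7 * LABEL_DIVISOR + 0],
])

semantic_class = panoptic // LABEL_DIVISOR   # which category each pixel belongs to
instance_id = panoptic % LABEL_DIVISOR       # which instance within that category

print(semantic_class)
print(instance_id)
print("segments in this frame:", len(np.unique(panoptic)))  # 3 distinct segments
```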

Some of the other important video datasets in this field are SCAND, MuSoHu, Ego4D, VIPSeg, and Waymo Open. SANPO was compared with these datasets, and it is the first dataset with panoptic masks, depth, camera pose, multi-view stereo, and both real and synthetic data. Apart from SANPO, only Waymo Open has both panoptic segmentation and depth maps.

The researchers trained two state-of-the-art models, BinsFormer (for depth estimation) and kMaX-DeepLab (for panoptic segmentation), on the SANPO dataset. They observed that the dataset is quite challenging for both dense prediction tasks. Furthermore, accuracy is higher on the synthetic dataset than on the real one. This is mainly because real-world environments are far more complex than synthetic data, and segmentation annotations are more precise for synthetic data.

Introduced to tackle the lack of datasets for human egocentric scene understanding, SANPO is a significant advancement that encompasses both real-world and synthetic data. Its dense annotations, multi-attribute features, and unique combination of panoptic segmentation and depth information set it apart from other datasets in the field. Moreover, the researchers’ commitment to privacy allows the dataset to support fellow researchers in creating visual navigation systems for the visually impaired and in pushing the boundaries of advanced visual scene understanding.


Check out the Paper and Google Blog. All credit for this research goes to the researchers on this project. Also, don’t forget to join our 31k+ ML SubReddit, 40k+ Facebook Community, Discord Channel, and Email Newsletter, where we share the latest AI research news, cool AI projects, and more.



Arham Islam


I am a Civil Engineering graduate (2022) from Jamia Millia Islamia, New Delhi, and I have a keen interest in Data Science, especially Neural Networks and their application in various areas.


