SUSTech VIP Lab Proposes Track Anything Model (TAM) That Achieves High-Performance Interactive Tracking and Segmentation in Videos

Video Object Tracking (VOT) is a cornerstone of computer vision research because of the importance of tracking an unknown object in unconstrained settings. Video Object Segmentation (VOS), like VOT, seeks to identify the region of interest in a video and separate it from the rest of the frame. Today's best video trackers and segmenters are initialized with a segmentation mask or a bounding box and trained on large-scale, manually annotated datasets. Such labeled data, on the one hand, hides an enormous amount of human labor; on the other, semi-supervised VOS still requires a ground-truth object mask for initialization under current initialization schemes.

The Segment Anything Model (SAM) was recently introduced as a general-purpose foundation model for image segmentation. Thanks to its flexible prompts and real-time mask computation, it supports interactive use: given user-friendly prompts in the form of points, boxes, or language, SAM returns satisfactory segmentation masks for the specified image regions. Nonetheless, because SAM lacks temporal consistency, applying it directly to videos does not yield impressive performance.

Researchers from SUSTech VIP Lab introduce the Track-Anything project, which builds powerful tools for video object tracking and segmentation. The Track Anything Model (TAM) has a simple interface and can track and segment arbitrary objects in a video within a single round of inference.


TAM combines SAM, a large-scale segmentation model, with XMem, a state-of-the-art VOS model. Users define a target object by interactively initializing SAM (i.e., clicking on the object); XMem then predicts the object's mask in subsequent frames based on temporal and spatial correspondence. Finally, SAM refines the mask for greater precision, and users can pause and correct the process as soon as they notice a tracking failure.
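The interactive loop described above — click-based initialization, frame-by-frame mask propagation, and periodic refinement — can be sketched in a few lines. This is a toy illustration of the control flow only, not TAM's actual implementation: `PointSegmenter` and `MaskPropagator` are hypothetical stand-ins for SAM and XMem, and the "segmentation" and "propagation" they perform are placeholders.

```python
import numpy as np

class PointSegmenter:
    """Stand-in for a promptable segmenter such as SAM (hypothetical interface)."""
    def segment(self, frame, points):
        # Toy behavior: return a boolean mask of fixed-radius discs
        # centered on the clicked points.
        mask = np.zeros(frame.shape[:2], dtype=bool)
        ys, xs = np.ogrid[:frame.shape[0], :frame.shape[1]]
        for (x, y) in points:
            mask |= (xs - x) ** 2 + (ys - y) ** 2 <= 10 ** 2
        return mask

class MaskPropagator:
    """Stand-in for a memory-based VOS model such as XMem (hypothetical interface)."""
    def propagate(self, frame, prev_mask):
        # Toy behavior: shift the previous mask one pixel right to mimic motion.
        return np.roll(prev_mask, shift=1, axis=1)

def track_anything(frames, click_points, segmenter, propagator, refine_every=5):
    """TAM-style loop: segmenter initializes, propagator tracks, segmenter refines."""
    masks = [segmenter.segment(frames[0], click_points)]
    for t, frame in enumerate(frames[1:], start=1):
        mask = propagator.propagate(frame, masks[-1])
        if t % refine_every == 0:
            # In TAM, SAM re-segments using prompts derived from the propagated
            # mask; here we simply re-prompt with the original clicks.
            mask = segmenter.segment(frame, click_points)
        masks.append(mask)
    return masks
```

A user correction would amount to re-running `segmenter.segment` on the current frame with new clicks before resuming propagation.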

TAM was evaluated on the DAVIS-2016 validation set and the DAVIS-2017 test-development set. Most notably, the findings show that TAM excels in difficult and complex settings: it handles multi-object separation, target deformation, scale change, and camera motion well, demonstrating strong tracking and segmentation abilities with only click-based initialization and one-round inference.

The proposed Track Anything Model (TAM) supports a wide range of flexible video tracking and segmentation applications, including the following:

  • Quick and simple video annotation: TAM can segment regions of interest in videos and lets users choose which objects to follow. This makes it well suited for video annotation tasks such as tracking and segmenting video objects.
  • Long-term observation of an object: Since long-term tracking has many real-world uses, it is drawing increasing research attention. TAM moves closer to real-world applications because it can accommodate the frequent shot changes found in long videos.
  • An easy-to-use video editor: The Track Anything Model can partition objects into categories. With TAM's object segmentation masks, users can selectively cut out or reposition any object in a video.
  • A toolkit for visualizing and developing video-related tasks: The team also provides visualized user interfaces for various video tasks, including VOS, VOT, video inpainting, and more. With the toolbox, users can test their models on real-world footage and see results in real time.
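To make the video-editing bullet concrete: once a per-frame segmentation mask is available, cutting out or repositioning an object reduces to simple mask arithmetic. The sketch below is an illustration using NumPy only; the function names and the integer-offset repositioning are our own simplifications, not TAM's editing code.

```python
import numpy as np

def cut_out(frame, mask, background=0):
    """Keep only the masked object; fill everything else with a background value."""
    out = np.full_like(frame, background)
    out[mask] = frame[mask]
    return out

def paste_at(canvas, frame, mask, dx, dy):
    """Reposition the masked object onto a canvas, shifted by (dx, dy) pixels."""
    out = canvas.copy()
    ys, xs = np.nonzero(mask)
    ys2, xs2 = ys + dy, xs + dx
    # Drop object pixels that would land outside the canvas.
    keep = (ys2 >= 0) & (ys2 < canvas.shape[0]) & (xs2 >= 0) & (xs2 < canvas.shape[1])
    out[ys2[keep], xs2[keep]] = frame[ys[keep], xs[keep]]
    return out
```

Applied per frame with TAM's masks, `cut_out` yields an object-only clip (e.g., for green-screen-style compositing), while `paste_at` moves the object within the scene; inpainting would then fill the vacated region.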

Check out the Paper and GitHub link.




Tanushree Shenwai is a consulting intern at MarktechPost. She is currently pursuing her B.Tech at the Indian Institute of Technology (IIT), Bhubaneswar. She is a data science enthusiast with a keen interest in the applications of artificial intelligence across various fields, and is passionate about exploring new advances in technology and their real-life applications.
