Can Video Segmentation Be More Cost-Effective? Meet DEVA: A Decoupled Video Segmentation Approach that Saves on Annotations and Generalizes Across Tasks

Have you ever wondered how surveillance systems can find people or vehicles using just video? Or how an orca is identified in an underwater documentary? Or how live sports analysis works? All of this is done through video segmentation. Video segmentation is the technique of partitioning a video into multiple regions based on characteristics such as object boundaries, motion, color, texture, or other visual features. The basic idea is to identify and separate the different objects from the background and the temporal events in a video, giving a more detailed and structured representation of the visual content.

Scaling video segmentation algorithms is costly because it requires labeling a lot of data. To make it easier to track objects in videos without training the algorithm for each specific task, researchers have come up with DEVA, a decoupled video segmentation approach. DEVA involves two main parts: one that is specialized per task and finds objects in individual frames, and another that connects the dots over time, no matter what the objects are. This way, the method can be more flexible and adaptable across video segmentation tasks without the need for extensive training data.
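To make the decoupling concrete, here is a minimal Python sketch of the idea. This is not DEVA's actual code: the class names, the toy brightness-threshold segmenter, the copy-forward propagator, and the `detect_every` parameter are all illustrative assumptions.

```python
import numpy as np

class ImageSegmenter:
    """Task-specific image-level model (toy stand-in, not DEVA's).
    Any per-frame segmenter for the target task could slot in here;
    this one just thresholds brightness so the sketch stays runnable."""
    def segment(self, frame):
        # Return one boolean mask per detected region.
        return [frame.mean(axis=-1) > 128]

class TemporalPropagator:
    """Task-agnostic propagation module (toy stand-in).
    The real module is trained once on class-agnostic mask data and
    carries each object's mask to the next frame; here we just copy."""
    def propagate(self, frame, masks):
        return dict(masks)  # {object_id: boolean mask}

def decoupled_segmentation(frames, segmenter, propagator, detect_every=5):
    """Propagate every frame; run the (more expensive) image model only
    every `detect_every` frames to pick up objects not yet tracked."""
    tracked, next_id = {}, 0
    for t, frame in enumerate(frames):
        tracked = propagator.propagate(frame, tracked)
        if t % detect_every == 0:
            for mask in segmenter.segment(frame):
                # Treat a proposal as new if it barely overlaps any track.
                if all(np.logical_and(mask, m).sum() < 0.5 * mask.sum()
                       for m in tracked.values()):
                    tracked[next_id] = mask
                    next_id += 1
        yield t, tracked

frames = [np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
          for _ in range(10)]
for t, objects in decoupled_segmentation(
        frames, ImageSegmenter(), TemporalPropagator()):
    print(f"frame {t}: tracking {len(objects)} object(s)")
```

The key property of this structure is that only `ImageSegmenter` depends on the target task; the propagator can be reused unchanged across tasks.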

With this design, we can get away with a simpler image-level model for the specific task we are interested in (which is cheaper to train) and a universal temporal propagation model that only needs to be trained once and works across tasks. To make these two modules work together effectively, the researchers use a bi-directional propagation approach. It merges segmentation hypotheses from different frames so that the final segmentation stays consistent, even when it is produced online or in real time.
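What such a merging step could look like is sketched below. The `iou` helper, the thresholds, and the greedy id-assignment rule are illustrative assumptions rather than DEVA's exact algorithm; the point is that a fresh proposal either refines an existing track (keeping its id, hence temporal consistency) or starts a new one, while unmatched tracks survive on propagation alone.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boolean masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def merge(propagated, proposals, thresh=0.5):
    """Merge temporally propagated masks with fresh per-frame proposals.
    `propagated` maps object ids to masks; `proposals` is a list of
    masks from the image-level model. Thresholds are illustrative."""
    merged, claimed = {}, set()
    next_id = max(propagated, default=-1) + 1
    for proposal in proposals:
        best_id, best_score = None, thresh
        for oid, track in propagated.items():
            score = iou(proposal, track)
            if oid not in claimed and score >= best_score:
                best_id, best_score = oid, score
        if best_id is not None:
            merged[best_id] = proposal   # refine an existing object
            claimed.add(best_id)
        else:
            merged[next_id] = proposal   # previously unseen object
            next_id += 1
    for oid, track in propagated.items():
        merged.setdefault(oid, track)    # keep tracks with no proposal
    return merged
```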

The image above gives an overview of the framework. The research team first filters image-level segmentations with in-clip consensus and temporally propagates the result forward. To incorporate a new image segmentation at a later time step (for previously unseen objects, e.g., the red box), they merge the propagated results with in-clip consensus.
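The in-clip consensus step can be pictured as voting across a few nearby frames: a mask that only one frame proposes is likely a one-off error of the image-level model, while a mask that several frames agree on is kept. The sketch below assumes the proposals have already been aligned to a common reference frame (which the propagation module would provide); `min_votes` and `iou_thresh` are my illustrative parameters, not values from the paper.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boolean masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def in_clip_consensus(aligned, min_votes=2, iou_thresh=0.5):
    """Keep only masks that enough frames in a short clip agree on.
    `aligned` is a list (one entry per frame in the clip) of lists of
    boolean masks, all warped to a common reference frame."""
    kept = []
    for frame_masks in aligned:
        for candidate in frame_masks:
            # Count how many frames contain a sufficiently similar mask
            # (a candidate always matches itself in its own frame).
            votes = sum(
                any(iou(candidate, other) >= iou_thresh for other in frame)
                for frame in aligned
            )
            if votes >= min_votes and not any(
                    iou(candidate, k) >= iou_thresh for k in kept):
                kept.append(candidate)  # consensus mask, deduplicated
    return kept
```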

The approach makes significant use of external task-agnostic data, aiming to reduce dependence on the specific target task. This leads to better generalization than end-to-end methods, particularly for tasks with limited available data, and it does not even require fine-tuning. Paired with universal image segmentation models, this decoupled paradigm shows state-of-the-art performance, and it may well be a first step toward large-vocabulary video segmentation in an open-world context!


Check out the Paper, GitHub, and Project Page. All credit for this research goes to the researchers on this project.



Janhavi Lande is an Engineering Physics graduate from IIT Guwahati, class of 2023. She is an aspiring data scientist and has been working in the world of ML/AI research for the past two years. She is most fascinated by this ever-changing world and its constant demand for humans to keep up with it. In her spare time she enjoys traveling, reading, and writing poems.

