Google and MIT Researchers Introduce SynCLR: A Novel AI Approach for Learning Visual Representations Exclusively from Synthetic Images and Synthetic Captions without Any Real Data

Representation learning retrieves and organizes raw, often unlabeled data. How good a representation the model can develop depends on the quantity, quality, and diversity of the data; in this way, the model mirrors the data’s inherent collective intelligence. The output is directly proportional to the input. Unsurprisingly, the most effective visual representation learning algorithms today rely on massive real-world datasets. Collecting real data, however, poses its own challenges. Gathering vast amounts of uncurated data is feasible because it is cheap, but at large data scales the added uncurated data contributes little, indicating poor scaling behavior for self-supervised representation learning with this approach. Collecting curated data at a smaller scale is also possible, although models trained this way can only handle very specific tasks.

To reduce this financial burden, new research by Google Research and MIT CSAIL investigates whether large-scale curated datasets capable of training state-of-the-art visual representations can be built from synthetic data produced by commercially available generative models. The authors call this approach learning from models, in contrast to learning directly from data. One of the many advantages of using models as a data source for constructing large-scale training sets is the new controls they offer: latent variables, conditioning variables, and hyperparameters can all be used to curate the data. Models are also easier to store and share than datasets, and they can generate an effectively unlimited number of samples, albeit with limited variability.

In this study, the researchers rethink the granularity of visual classes by using generative models. For example, consider four images generated from the two prompts “A cute golden retriever sits in a house made of sushi” and “A golden retriever, wearing sunglasses and a beach hat, rides a bike.” Traditional self-supervised methods such as SimCLR treat each image as its own class, pushing apart the embeddings of different images without explicitly considering their shared semantics. At the other extreme, supervised learning algorithms (such as SupCE) treat all of these images as belonging to the same class (e.g., “golden retriever”).
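
SynCLR takes a middle ground: all images generated from the same caption are treated as mutual positives. Below is a minimal PyTorch sketch of such a multi-positive contrastive loss; the function and variable names are illustrative and not taken from the paper’s code.

```python
import torch
import torch.nn.functional as F

def multi_positive_contrastive_loss(embeddings, caption_ids, temperature=0.1):
    """Contrastive loss treating all images that share a caption as positives.

    embeddings:  (N, D) feature vectors from the encoder
    caption_ids: (N,)   integer id of the caption each image came from
    """
    z = F.normalize(embeddings, dim=1)
    logits = z @ z.t() / temperature                   # pairwise similarities (N, N)
    n = z.size(0)
    diag = torch.eye(n, dtype=torch.bool, device=z.device)
    logits = logits.masked_fill(diag, float("-inf"))   # drop self-similarity

    # Positives: images from the same caption, excluding self.
    pos_mask = (caption_ids.unsqueeze(0) == caption_ids.unsqueeze(1)) & ~diag

    log_prob = F.log_softmax(logits, dim=1)
    # Average log-probability over each sample's positives.
    pos_per_row = pos_mask.sum(dim=1).clamp(min=1)
    loss = -log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_per_row
    return loss.mean()
```

For a batch of eight images drawn four apiece from two captions, `caption_ids` would be `torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])`.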

This level of granularity is difficult to mine from real data, since collecting multiple images described by the same caption is non-trivial, especially as the number of captions scales up. Text-to-image diffusion models, on the other hand, have this capability built in: conditioned on the same caption but fed varying noise inputs, they can generate many distinct images that all match that caption.
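
To make this concrete, here is a sketch using the open-source diffusers library with a public Stable Diffusion checkpoint as a stand-in generator (the paper’s exact model and sampling settings may differ): fixing the caption while varying the random seed yields a set of mutually positive images.

```python
import torch
from diffusers import StableDiffusionPipeline

# Public checkpoint used here only as a stand-in for the paper's generator.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

caption = "A golden retriever, wearing sunglasses and a beach hat, rides a bike"

# Same caption, different initial noise -> several distinct matching images.
images = [
    pipe(caption, generator=torch.Generator("cuda").manual_seed(seed)).images[0]
    for seed in range(4)
]
```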

The work’s findings show that, compared to SimCLR and supervised training, caption-level granularity performs better. An additional perk is that this definition of visual classes is easily extensible: online class (or data) augmentation allows scaling, in principle, to an unlimited number of classes, unlike ImageNet-1k/21k, where the number of classes is fixed. The proposed system has three stages:

  1. The first stage synthesizes a vast collection of image captions. Using word-to-caption translation examples, the team developed a scalable method that exploits the in-context learning capability of large language models (LLMs); a prompt sketch follows this list. 
  2. The next step creates many synthetic images and captions with a text-to-image diffusion model. A dataset of 600 million images is generated this way. 
  3. Finally, they train visual representation models using masked image modeling and multi-positive contrastive learning (of the kind sketched earlier). 
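
As a rough illustration of stage 1, the sketch below assembles a few-shot prompt that asks an LLM to expand a concept word into a full caption. The prompt template and the in-context examples here are hypothetical, and `llm.generate` stands in for whatever text-completion API is used; the paper’s actual templates differ.

```python
# Hypothetical word -> caption few-shot prompt for stage 1.
IN_CONTEXT_EXAMPLES = [
    ("tiger", "a tiger walking through a snowy forest at dawn"),
    ("sailboat", "a small sailboat anchored in a turquoise bay at sunset"),
]

def build_caption_prompt(concept: str) -> str:
    """Build a few-shot prompt mapping a concept word to a rich caption."""
    lines = ["Expand each concept into one detailed image caption."]
    for word, caption in IN_CONTEXT_EXAMPLES:
        lines.append(f"concept: {word} -> caption: {caption}")
    lines.append(f"concept: {concept} -> caption:")
    return "\n".join(lines)

# prompt = build_caption_prompt("golden retriever")
# caption = llm.generate(prompt)   # any text-completion call; llm is a placeholder
```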

The researchers compare against OpenAI’s CLIP on top-1 linear probing accuracy on ImageNet-1K, where SynCLR pre-training reaches 80.7% with ViT-B and 83.0% with ViT-L. On fine-grained classification tasks, SynCLR achieves results comparable to those of DINO v2 models distilled from a pre-trained ViT-g model, and surpasses CLIP by 3.3% for ViT-B and 1.5% for ViT-L. On semantic segmentation on ADE20k, SynCLR beats MAE pre-trained on ImageNet by 6.2 and 4.1 mIoU for ViT-B and ViT-L, respectively, in the same setup. This demonstrates that SynCLR transfers strongly to dense prediction tasks, much like DINO v2, even though DINO v2 additionally requires a high-resolution training stage on 518×518 images, which SynCLR does not use.
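
For reference, top-1 linear probing freezes the pre-trained backbone and trains only a single linear classifier on its features. A minimal sketch, using torchvision’s ViT-B/16 as a stand-in backbone (the paper trains its own encoders) and a placeholder ImageNet-1K DataLoader:

```python
import torch
import torch.nn as nn
from torchvision.models import vit_b_16

# Stand-in backbone; SynCLR's own pre-trained ViT would be loaded here instead.
backbone = vit_b_16(weights="IMAGENET1K_V1")
backbone.heads = nn.Identity()              # expose (B, 768) features
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False                 # freeze the encoder

probe = nn.Linear(768, 1000)                # ImageNet-1K has 1000 classes
optimizer = torch.optim.SGD(probe.parameters(), lr=0.1, momentum=0.9)
criterion = nn.CrossEntropyLoss()

for images, labels in train_loader:         # placeholder ImageNet-1K DataLoader
    with torch.no_grad():
        feats = backbone(images)            # frozen features, no gradients
    loss = criterion(probe(feats), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```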

The team highlights several ways the caption set could be improved: using more sophisticated LLMs, tuning the sample ratios among distinct concepts, and expanding the library of in-context examples. The training process could likewise be improved by adding a high-resolution training phase or an intermediate IN-21k fine-tuning stage after distilling knowledge from a bigger model. They also suggest that better model initialization procedures, together with SwiGLU and LayerScale integration, could bring architectural gains. However, given limited resources and the fact that this paper did not aim for the best possible metrics, they leave these directions to future research.


Check out the Paper and GitHub. All credit for this research goes to the researchers of this project.




Dhanshree Shenwai is a Computer Science Engineer with solid experience in FinTech companies spanning the Finance, Cards & Payments, and Banking domains, and a keen interest in applications of AI. She is passionate about exploring new technologies and advancements that make everyone’s life easier in today’s evolving world.


