Unlocking the Secrets of CLIP’s Data Success: Introducing MetaCLIP for Optimized Language-Image Pre-training
Recent times have seen exceptional advances in Artificial Intelligence, with many new models being introduced, especially in NLP and Computer Vision. CLIP is a neural network developed by OpenAI and trained on an enormous dataset of text and image pairs. It has helped advance a great deal of computer vision research and underpins modern recognition systems and generative models. Researchers believe that CLIP owes its effectiveness to the data it was trained on, and that uncovering the data curation process would allow them to create even more effective algorithms.

In this research paper, the researchers aim to make CLIP’s data curation approach available to the public and introduce Metadata-Curated Language-Image Pre-training (MetaCLIP). MetaCLIP takes unorganized data and metadata derived from CLIP’s concepts and yields a balanced subset over the metadata distribution. When applied to CommonCrawl with 400M image-text pairs, it outperforms CLIP’s data on multiple benchmarks.

The authors of this paper apply the following principles to achieve their goal:

  • The researchers first curated a new dataset of 400M image-text pairs collected from various web sources.
  • Using substring matching, they align image-text pairs with metadata entries, effectively associating unstructured texts with structured metadata.
  • All texts associated with each metadata entry are then grouped into lists, creating a mapping from each entry to its corresponding texts.
  • Each associated list is then sub-sampled, ensuring a more balanced data distribution and making the data more general-purpose for pre-training.
  • To formalize the curation process, they introduce an algorithm that aims to improve scalability and reduce space complexity (a simplified sketch of this pipeline appears below).
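
The snippet below is a minimal, illustrative sketch of this curation idea, not the authors’ released implementation: the function name, the per-entry cap, and the brute-force matching loop are assumptions made here purely to show how substring matching, grouping, and balanced sub-sampling fit together.

```python
import random
from collections import defaultdict

def curate(texts, metadata_entries, max_per_entry=20_000, seed=0):
    """Sketch of MetaCLIP-style curation: match texts to metadata entries,
    group them per entry, then sub-sample each group to balance the
    distribution. All parameter values here are illustrative."""
    rng = random.Random(seed)

    # 1. Substring matching: associate each text with every metadata entry
    #    it mentions (a real implementation would use an efficient
    #    multi-pattern matcher rather than this nested loop).
    entry_to_texts = defaultdict(list)
    for idx, text in enumerate(texts):
        lowered = text.lower()
        for entry in metadata_entries:
            if entry.lower() in lowered:
                entry_to_texts[entry].append(idx)

    # 2. Balancing: head entries are capped at max_per_entry samples,
    #    while long-tailed entries are kept in full.
    kept = set()
    for entry, idxs in entry_to_texts.items():
        if len(idxs) > max_per_entry:
            idxs = rng.sample(idxs, max_per_entry)
        kept.update(idxs)

    # Indices of the image-text pairs retained for pre-training.
    return sorted(kept)
```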

MetaCLIP curates data without using the images directly, yet it still improves the alignment of visual content by controlling the quality and distribution of the text. Substring matching makes it more likely that the text mentions the entities shown in the image, which increases the chance of finding the corresponding visual content. Moreover, balancing favors long-tailed entries, which may carry more diverse visual content than head entries.

For the experiments, the researchers used two data pools: one to estimate a target of 400M image-text pairs and the other to scale the curation process. As mentioned earlier, MetaCLIP outperforms CLIP when applied to CommonCrawl with 400M data points. Moreover, MetaCLIP outperforms CLIP on zero-shot ImageNet classification using ViT models of various sizes.

MetaCLIP achieves 70.8% accuracy on zero-shot ImageNet classification with a ViT-B model, while CLIP achieves 68.3%. With a ViT-L model, MetaCLIP reaches 76.2% accuracy versus CLIP’s 75.5%. Scaling the training data to 2.5B image-text pairs, while keeping the same training budget and a similar distribution, further improves MetaCLIP’s accuracy to 79.2% for ViT-L and 80.5% for ViT-H. These are unprecedented results for zero-shot ImageNet classification.
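
For context, zero-shot classification with a CLIP-style model simply scores an image against text prompts built from the class names. Below is a minimal sketch using the Hugging Face `transformers` CLIP API; the checkpoint identifier and the image path are assumptions made here for illustration, not details taken from the paper.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Assumed checkpoint name; substitute whichever MetaCLIP (or CLIP)
# checkpoint you actually have access to.
model_id = "facebook/metaclip-b32-400m"
model = CLIPModel.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)

# Zero-shot classification: encode the class names as text prompts
# and pick the class whose embedding best matches the image.
class_names = ["tabby cat", "golden retriever", "airliner"]
prompts = [f"a photo of a {name}" for name in class_names]

image = Image.open("example.jpg")  # placeholder image path
inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)

probs = outputs.logits_per_image.softmax(dim=-1)
print(class_names[probs.argmax().item()], probs.max().item())
```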

In conclusion, in an attempt to understand the data curation strategy behind OpenAI’s CLIP so that its high performance could be replicated, the authors of this paper introduce MetaCLIP, which outperforms CLIP’s data on multiple benchmarks. MetaCLIP achieves this by using substring matching to align image-text pairs with metadata entries and by sub-sampling the associated lists to ensure a more balanced data distribution. This makes MetaCLIP a promising new approach to data curation, with the potential to enable the development of even more effective algorithms.


Check out the Paper and GitHub. All credit for this research goes to the researchers on this project.



Arham Islam


I am a Civil Engineering graduate (2022) from Jamia Millia Islamia, New Delhi, and I have a keen interest in Data Science, especially Neural Networks and their applications in various areas.


