[NeurIPS 2023] This repository includes the official implementation of our paper "An Inverse Scaling Law for CLIP Training"
☆319, updated Jun 3, 2024
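CLIPA studies how CLIP's contrastive training objective scales. As background for the repositories below, here is a toy NumPy sketch of the symmetric contrastive (InfoNCE) loss that CLIP-style models optimize; the random embeddings and function names are illustrative placeholders, not code from this repository:

```python
# Toy sketch of CLIP's symmetric contrastive (InfoNCE) loss.
# Embeddings are random placeholders, not real model outputs.
import numpy as np

def clip_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric cross-entropy over cosine-similarity logits."""
    # L2-normalize so dot products are cosine similarities
    image_emb = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = image_emb @ text_emb.T / temperature  # (N, N) similarity matrix

    def cross_entropy(l):
        # the matched pair i <-> i is the correct "class" for row i
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))

    # average the image->text and text->image directions
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

rng = np.random.default_rng(0)
loss = clip_loss(rng.normal(size=(4, 8)), rng.normal(size=(4, 8)))
print(f"toy batch loss: {loss:.4f}")
```

In real CLIP training the temperature is a learned parameter and the batch is far larger; this sketch only shows the shape of the objective.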
Alternatives and similar repositories for CLIPA
Users interested in CLIPA are comparing it to the repositories listed below.
- [NeurIPS 2023] Text data, code and pre-trained models for paper "Improving CLIP Training with Language Rewrites" (☆289, updated Jan 14, 2024)
- DataComp: In search of the next generation of multimodal datasets (☆772, updated Apr 28, 2025)
- NeurIPS 2025 Spotlight; ICLR 2024 Spotlight; CVPR 2024; EMNLP 2024 (☆1,811, updated Nov 27, 2025)
- Code release of research paper "Exploring Long-Sequence Masked Autoencoders" (☆100, updated Oct 14, 2022)
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm (☆675, updated Sep 19, 2022)
- CLIP-like model evaluation (☆802, updated Jan 15, 2026)
- Official PyTorch implementation of GroupViT: Semantic Segmentation Emerges from Text Supervision, CVPR 2022 (☆782, updated May 10, 2022)
- [CVPR 2024] CapsFusion: Rethinking Image-Text Data at Scale (☆213, updated Feb 27, 2024)
- Code release for "SLIP: Self-supervision meets Language-Image Pre-training" (☆787, updated Feb 9, 2023)
- Official open-source code for "Scaling Language-Image Pre-training via Masking" (☆427, updated Mar 30, 2023)
- EVA Series: Visual Representation Fantasies from BAAI (☆2,647, updated Aug 1, 2024)
- [ICML 2024] This repository includes the official implementation of our paper "Rejuvenating image-GPT as Strong Visual Representation Lea…" (☆99, updated May 3, 2024)
- Official repository for the LENS (Large Language Models Enhanced to See) system (☆356, updated Jul 22, 2025)
- Experiments and data for the paper "When and why vision-language models behave like bags-of-words, and what to do about it?", Oral @ ICLR … (☆292, updated Jun 7, 2023)
- [NeurIPS 2022] Official implementation of the paper "Expediting Large-Scale Vision Transformer for Dense Prediction without Fi…" (☆86, updated Oct 29, 2023)
- MultimodalC4: a multimodal extension of C4 that interleaves millions of images with text (☆952, updated Mar 19, 2025)
- Grounded Language-Image Pre-training (☆2,572, updated Jan 24, 2024)
- An open-source framework for training large multimodal models (☆4,068, updated Aug 31, 2024)
- ☆29, updated Oct 18, 2022
- Robust fine-tuning of zero-shot models (☆760, updated Apr 29, 2022)
- [CVPR 2023] Official implementation of X-Decoder for generalized decoding for pixel, image and language (☆1,343, updated Oct 5, 2023)
- Official implementation of the paper "Prompt Pre-Training with Over Twenty-Thousand Classes for Open-Vocabulary Visual Recognition" (☆259, updated May 3, 2024)
- [ECCV 2024] Official code for "Long-CLIP: Unlocking the Long-Text Capability of CLIP" (☆893, updated Aug 13, 2024)
- Code and model checkpoints for the AIMv1 and AIMv2 research projects (☆1,402, updated Aug 4, 2025)
- An open-source implementation of CLIP (☆13,430, updated this week)
- COYO-700M: Large-scale Image-Text Pair Dataset (☆1,251, updated Nov 30, 2022)
- [NeurIPS 2024] Classification Done Right for Vision-Language Pre-Training (☆226, updated Mar 20, 2025)
- An Enhanced CLIP Framework for Learning with Synthetic Captions (☆40, updated Apr 18, 2025)
- MultiMAE: Multi-modal Multi-task Masked Autoencoders, ECCV 2022 (☆615, updated Dec 13, 2022)
- CLIP Itself is a Strong Fine-tuner: Achieving 85.7% and 88.0% Top-1 Accuracy with ViT-B and ViT-L on ImageNet (☆224, updated Dec 16, 2022)
- Easily turn large sets of image URLs into an image dataset; can download, resize and package 100M URLs in 20h on one machine (☆4,369, updated Oct 19, 2025)
- Hiera: A fast, powerful, and simple hierarchical vision transformer (☆1,055, updated Mar 2, 2024)
- PyTorch implementation of R-MAE (https://arxiv.org/abs/2306.05411) (☆113, updated Jun 9, 2023)
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more (☆3,368, updated May 19, 2025)
- [CVPR 2022] Official code for "Unified Contrastive Learning in Image-Text-Label Space" (☆407, updated Nov 10, 2023)
- [CVPR 2024] A benchmark for evaluating multimodal LLMs using multiple-choice questions (☆360, updated Jan 14, 2025)
- Masked Siamese Networks for Label-Efficient Learning (https://arxiv.org/abs/2204.07141) (☆463, updated May 9, 2022)
- Official implementation and data release of the paper "Visual Prompting via Image Inpainting" (☆318, updated Aug 7, 2023)
- Code for "You Only Cut Once: Boosting Data Augmentation with a Single Cut", ICML 2022 (☆106, updated Aug 1, 2023)