Reproducible scaling laws for contrastive language-image learning (https://arxiv.org/abs/2212.07143)
☆188 · Jun 21, 2025 · Updated 8 months ago
Alternatives and similar repositories for scaling-laws-openclip
Users interested in scaling-laws-openclip are comparing it to the libraries listed below.
- (NeurIPS 2024) What Makes CLIP More Robust to Long-Tailed Pre-Training Data? A Controlled Study for Transferable Insights · ☆27 · Oct 28, 2024 · Updated last year
- CLIP-like model evaluation · ☆804 · Jan 15, 2026 · Updated last month
- An official PyTorch implementation for CLIPPR · ☆30 · Jul 22, 2023 · Updated 2 years ago
- Code for T-MARS data filtering · ☆35 · Aug 23, 2023 · Updated 2 years ago
- Patching open-vocabulary models by interpolating weights · ☆91 · Sep 28, 2023 · Updated 2 years ago
- ☆29 · Oct 18, 2022 · Updated 3 years ago
- Official code for the paper: "Metadata Archaeology" · ☆19 · May 10, 2023 · Updated 2 years ago
- ☆27 · Aug 28, 2023 · Updated 2 years ago
- Python package to download and use the SSB datasets · ☆11 · Aug 3, 2023 · Updated 2 years ago
- ☆10 · Jul 5, 2024 · Updated last year
- [NeurIPS 2023] Official implementation of the paper "An Inverse Scaling Law for CLIP Training" · ☆319 · Jun 3, 2024 · Updated last year
- Paper List for In-context Learning 🌷 · ☆19 · Jan 3, 2023 · Updated 3 years ago
- Generalizing from SIMPLE to HARD Visual Reasoning: Can We Mitigate Modality Imbalance in VLMs? · ☆16 · Jun 3, 2025 · Updated 9 months ago
- [ICCV 2023] Going Beyond Nouns With Vision & Language Models Using Synthetic Data · ☆13 · Sep 30, 2023 · Updated 2 years ago
- PyTorch code for the CVPR'23 paper: "ConStruct-VL: Data-Free Continual Structured VL Concepts Learning" · ☆14 · Feb 5, 2024 · Updated 2 years ago
- An open source implementation of CLIP · ☆13,460 · Feb 27, 2026 · Updated last week
- Code for the paper: "No Zero-Shot Without Exponential Data: Pretraining Concept Frequency Determines Multimodal Model Performance" [NeurI… · ☆94 · Apr 29, 2024 · Updated last year
- [NeurIPS 2023] A faithful benchmark for vision-language compositionality · ☆89 · Feb 13, 2024 · Updated 2 years ago
- ☆64 · Apr 9, 2024 · Updated last year
- Compress conventional Vision-Language Pre-training data · ☆53 · Sep 22, 2023 · Updated 2 years ago
- Code and benchmark for the paper: "A Practitioner's Guide to Continual Multimodal Pretraining" [NeurIPS'24] · ☆62 · Dec 10, 2024 · Updated last year
- Codebase for the SIMAT dataset and evaluation · ☆38 · Feb 16, 2022 · Updated 4 years ago
- Official repository for the ICCV 2023 paper: "Waffling around for Performance: Visual Classification with Random Words and Broad Concepts… · ☆62 · Jul 8, 2023 · Updated 2 years ago
- Benchmarking and Analyzing Generative Data for Visual Recognition · ☆26 · Jul 25, 2023 · Updated 2 years ago
- Repository for the paper: "Teaching Structured Vision & Language Concepts to Vision & Language Models" · ☆48 · Sep 25, 2023 · Updated 2 years ago
- [TACL] Do Vision and Language Models Share Concepts? A Vector Space Alignment Study · ☆16 · Nov 22, 2024 · Updated last year
- Extending context length of visual language models · ☆12 · Dec 18, 2024 · Updated last year
- [CVPR 2022] Official code for "Unified Contrastive Learning in Image-Text-Label Space" · ☆408 · Nov 10, 2023 · Updated 2 years ago
- DataComp: In search of the next generation of multimodal datasets · ☆772 · Apr 28, 2025 · Updated 10 months ago
- EVA Series: Visual Representation Fantasies from BAAI · ☆2,648 · Aug 1, 2024 · Updated last year
- Exploring Visual Prompts for Adapting Large-Scale Models · ☆289 · Jun 6, 2022 · Updated 3 years ago
- NeurIPS 2025 Spotlight; ICLR 2024 Spotlight; CVPR 2024; EMNLP 2024 · ☆1,821 · Nov 27, 2025 · Updated 3 months ago
- Code for the paper: "Unified Text-to-Image Generation and Retrieval" · ☆16 · Jul 6, 2024 · Updated last year
- ACM Multimedia 2023 (Oral) - RTQ: Rethinking Video-language Understanding Based on Image-text Model · ☆16 · Jan 31, 2024 · Updated 2 years ago
- [ICML 2025] Official repository of the paper "What If We Recaption Billions of Web Images with LLaMA-3?" · ☆149 · Jun 13, 2024 · Updated last year
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm · ☆675 · Sep 19, 2022 · Updated 3 years ago
- COLA: Evaluate how well your vision-language model can Compose Objects Localized with Attributes! · ☆25 · Nov 23, 2024 · Updated last year
- ViT trained on the COYO-Labeled-300M dataset · ☆33 · Nov 24, 2022 · Updated 3 years ago
- Robust fine-tuning of zero-shot models · ☆760 · Apr 29, 2022 · Updated 3 years ago