LAION-AI / scaling-laws-openclip
Reproducible scaling laws for contrastive language-image learning (https://arxiv.org/abs/2212.07143)
☆188 · Updated Jun 21, 2025 (7 months ago)
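The linked paper studies how zero-shot performance of contrastively trained language-image models scales with model size, data size, and training compute. As a rough illustration of that kind of analysis only, here is a minimal sketch that fits a hypothetical saturating power law E(C) = a·C^(-b) + c to made-up (compute, error) points with SciPy; the repository's own measurements and fitting code will differ.

```python
# Illustrative only: fit a saturating power law to hypothetical
# (training compute, zero-shot error) measurements.
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(compute, a, b, c):
    # Error decays as a power of compute and saturates at c.
    return a * compute ** (-b) + c

# Made-up data points for demonstration purposes.
compute = np.array([1e2, 1e3, 1e4, 1e5, 1e6])
error = np.array([0.62, 0.48, 0.39, 0.33, 0.30])

params, _ = curve_fit(scaling_law, compute, error, p0=(1.0, 0.2, 0.25), maxfev=10000)
a, b, c = params
print(f"fitted curve: E(C) = {a:.3f} * C^(-{b:.3f}) + {c:.3f}")

# Extrapolate to a larger compute budget (again, illustrative only).
print("predicted error at C = 1e7:", scaling_law(1e7, *params))
```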
Alternatives and similar repositories for scaling-laws-openclip
Users interested in scaling-laws-openclip are comparing it to the libraries listed below.
- (NeurIPS 2024) What Makes CLIP More Robust to Long-Tailed Pre-Training Data? A Controlled Study for Transferable Insights ☆28 · Updated Oct 28, 2024 (last year)
- CLIP-like model evaluation ☆800 · Updated Jan 15, 2026 (last month)
- An official PyTorch implementation for CLIPPR ☆30 · Updated Jul 22, 2023 (2 years ago)
- Code for T-MARS data filtering ☆35 · Updated Aug 23, 2023 (2 years ago)
- Patching open-vocabulary models by interpolating weights ☆91 · Updated Sep 28, 2023 (2 years ago)
- Official code for the paper: "Metadata Archaeology" ☆19 · Updated May 10, 2023 (2 years ago)
- ☆29 · Updated Oct 18, 2022 (3 years ago)
- ☆27 · Updated Aug 28, 2023 (2 years ago)
- ☆10 · Updated Jul 5, 2024 (last year)
- Python package to download and use the SSB datasets ☆11 · Updated Aug 3, 2023 (2 years ago)
- [NeurIPS 2023] This repository includes the official implementation of our paper "An Inverse Scaling Law for CLIP Training" ☆320 · Updated Jun 3, 2024 (last year)
- Paper List for In-context Learning 🌷 ☆20 · Updated Jan 3, 2023 (3 years ago)
- PyTorch code for the CVPR'23 paper: "ConStruct-VL: Data-Free Continual Structured VL Concepts Learning" ☆14 · Updated Feb 5, 2024 (2 years ago)
- Generalizing from SIMPLE to HARD Visual Reasoning: Can We Mitigate Modality Imbalance in VLMs? ☆15 · Updated Jun 3, 2025 (8 months ago)
- [ICCV 2023] Going Beyond Nouns With Vision & Language Models Using Synthetic Data ☆14 · Updated Sep 30, 2023 (2 years ago)
- An open source implementation of CLIP (see the usage sketch after this list). ☆13,383 · Updated this week
- Code for the paper: "No Zero-Shot Without Exponential Data: Pretraining Concept Frequency Determines Multimodal Model Performance" [NeurI… ☆94 · Updated Apr 29, 2024 (last year)
- [NeurIPS 2023] A faithful benchmark for vision-language compositionality ☆89 · Updated Feb 13, 2024 (2 years ago)
- ☆64 · Updated Apr 9, 2024 (last year)
- Code and benchmark for the paper: "A Practitioner's Guide to Continual Multimodal Pretraining" [NeurIPS'24] ☆61 · Updated Dec 10, 2024 (last year)
- Compress conventional Vision-Language Pre-training data ☆53 · Updated Sep 22, 2023 (2 years ago)
- codebase for the SIMAT dataset and evaluation ☆38 · Updated Feb 16, 2022 (3 years ago)
- Official repository for the ICCV 2023 paper: "Waffling around for Performance: Visual Classification with Random Words and Broad Concepts… ☆61 · Updated Jul 8, 2023 (2 years ago)
- Benchmarking and Analyzing Generative Data for Visual Recognition ☆26 · Updated Jul 25, 2023 (2 years ago)
- Repository for the paper: Teaching Structured Vision & Language Concepts to Vision & Language Models ☆48 · Updated Sep 25, 2023 (2 years ago)
- Extending context length of visual language models ☆12 · Updated Dec 18, 2024 (last year)
- [TACL] Do Vision and Language Models Share Concepts? A Vector Space Alignment Study ☆16 · Updated Nov 22, 2024 (last year)
- [CVPR 2022] Official code for "Unified Contrastive Learning in Image-Text-Label Space" ☆405 · Updated Nov 10, 2023 (2 years ago)
- DataComp: In search of the next generation of multimodal datasets ☆770 · Updated Apr 28, 2025 (9 months ago)
- EVA Series: Visual Representation Fantasies from BAAI ☆2,648 · Updated Aug 1, 2024 (last year)
- Exploring Visual Prompts for Adapting Large-Scale Models ☆287 · Updated Jun 6, 2022 (3 years ago)
- NeurIPS 2025 Spotlight; ICLR2024 Spotlight; CVPR 2024; EMNLP 2024 ☆1,812 · Updated Nov 27, 2025 (2 months ago)
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm ☆674 · Updated Sep 19, 2022 (3 years ago)
- [ICML 2025] This is the official repository of our paper "What If We Recaption Billions of Web Images with LLaMA-3 ?" ☆149 · Updated Jun 13, 2024 (last year)
- Code for paper: Unified Text-to-Image Generation and Retrieval ☆16 · Updated Jul 6, 2024 (last year)
- ViT trained on COYO-Labeled-300M dataset ☆33 · Updated Nov 24, 2022 (3 years ago)
- COLA: Evaluate how well your vision-language model can Compose Objects Localized with Attributes! ☆25 · Updated Nov 23, 2024 (last year)
- Robust fine-tuning of zero-shot models ☆760 · Updated Apr 29, 2022 (3 years ago)
- The SVO-Probes Dataset for Verb Understanding ☆31 · Updated Jan 28, 2022 (4 years ago)
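The open_clip entry above ("An open source implementation of CLIP") is the library the scaling-laws-openclip experiments build on. Below is a minimal zero-shot similarity sketch following open_clip's documented create_model_and_transforms / get_tokenizer usage; the "ViT-B-32" / "laion2b_s34b_b79k" checkpoint choice and the local image path are illustrative assumptions, not something prescribed by this list.

```python
# Minimal zero-shot image/text similarity check with open_clip.
import torch
from PIL import Image
import open_clip

# Load a pretrained model and its preprocessing transforms (illustrative checkpoint tag).
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)
model.eval()
tokenizer = open_clip.get_tokenizer("ViT-B-32")

image = preprocess(Image.open("example.jpg")).unsqueeze(0)  # hypothetical local image
text = tokenizer(["a diagram", "a dog", "a cat"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Normalize so the dot product is a cosine similarity.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print("Label probs:", text_probs)
```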