UCSC-VLAA / CLIPA
[NeurIPS 2023] This repository includes the official implementation of our paper "An Inverse Scaling Law for CLIP Training"
☆320 · Updated last year
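For context, CLIPA and most of the repositories listed below train or fine-tune CLIP-style models, whose core objective is a symmetric contrastive (InfoNCE) loss over image-text pairs. Below is a minimal sketch in PyTorch, assuming pre-computed, unnormalized embeddings from the two encoders; the function and argument names are illustrative, not CLIPA's actual API:

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb: torch.Tensor,
                          text_emb: torch.Tensor,
                          logit_scale: torch.Tensor) -> torch.Tensor:
    """Symmetric InfoNCE loss used by CLIP-style models.

    image_emb, text_emb: (batch, dim) embeddings from the two encoders.
    logit_scale: learnable temperature (typically exp of a scalar parameter).
    """
    # L2-normalize so the dot product is a cosine similarity.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # (batch, batch) similarity matrix; matching pairs lie on the diagonal.
    logits = logit_scale * image_emb @ text_emb.t()
    targets = torch.arange(logits.size(0), device=logits.device)

    # Cross-entropy in both directions (image->text and text->image).
    loss_i = F.cross_entropy(logits, targets)
    loss_t = F.cross_entropy(logits.t(), targets)
    return (loss_i + loss_t) / 2
```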
Alternatives and similar repositories for CLIPA
Users interested in CLIPA are comparing it to the libraries listed below.
- Reproducible scaling laws for contrastive language-image learning (https://arxiv.org/abs/2212.07143) ☆188 · Updated 7 months ago
- Densely Captioned Images (DCI) dataset repository. ☆195 · Updated last year
- Learning from synthetic data - code and models ☆327 · Updated 2 years ago
- [NeurIPS 2023] Text data, code and pre-trained models for the paper "Improving CLIP Training with Language Rewrites" ☆288 · Updated 2 years ago
- Filtering, Distillation, and Hard Negatives for Vision-Language Pre-Training ☆141 · Updated last month
- CLIP Itself is a Strong Fine-tuner: Achieving 85.7% and 88.0% Top-1 Accuracy with ViT-B and ViT-L on ImageNet ☆224 · Updated 3 years ago
- When do we not need larger vision models? ☆412 · Updated last year
- [CVPR 2024] CapsFusion: Rethinking Image-Text Data at Scale ☆213 · Updated last year
- Official implementation for the paper "Prompt Pre-Training with Over Twenty-Thousand Classes for Open-Vocabulary Visual Recognition" ☆259 · Updated last year
- ☆231 · Updated 2 years ago
- This repo contains documentation and code needed to use the PACO dataset: data loaders and training and evaluation scripts for objects, parts… ☆290 · Updated last year
- Official code for "TOAST: Transfer Learning via Attention Steering" ☆188 · Updated 2 years ago
- EILeV: Eliciting In-Context Learning in Vision-Language Models for Videos Through Curated Data Distributional Properties ☆131 · Updated last year
- GRiT: A Generative Region-to-text Transformer for Object Understanding (ECCV 2024) ☆340 · Updated 2 years ago
- The official repo for the paper "VeCLIP: Improving CLIP Training via Visual-enriched Captions" ☆250 · Updated last year
- ☆241 · Updated 8 months ago
- Official Open Source code for "Scaling Language-Image Pre-training via Masking" ☆427 · Updated 2 years ago (see the masking sketch after this list)
- PyTorch code for the paper "From CLIP to DINO: Visual Encoders Shout in Multi-modal Large Language Models" ☆206 · Updated last year
- Conceptual 12M is a dataset containing (image-URL, caption) pairs collected for vision-and-language pre-training. ☆414 · Updated 6 months ago
- ☆103 · Updated 2 years ago
- [ICML 2025] This is the official repository of our paper "What If We Recaption Billions of Web Images with LLaMA-3?" ☆149 · Updated last year
- Official Implementation for "MyVLM: Personalizing VLMs for User-Specific Queries" (ECCV 2024) ☆185 · Updated last year
- Official implementation of "Describing Differences in Image Sets with Natural Language" (CVPR 2024 Oral) ☆129 · Updated 3 months ago
- ☆133 · Updated 2 years ago
- Code for the paper "Hyperbolic Image-Text Representations", Desai et al., ICML 2023 ☆196 · Updated 2 years ago
- 🧀 Code and models for the ICML 2023 paper "Grounding Language Models to Images for Multimodal Inputs and Outputs". ☆485 · Updated 2 years ago
- Matryoshka Multimodal Models ☆122 · Updated last year
- TIFA: Accurate and Interpretable Text-to-Image Faithfulness Evaluation with Question Answering ☆181 · Updated last year
- Official PyTorch implementation of the paper "In-Context Learning Unlocked for Diffusion Models" ☆413 · Updated last year
- My implementation of "Patch n’ Pack: NaViT, a Vision Transformer for any Aspect Ratio and Resolution" ☆268 · Updated 3 weeks ago
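A recurring efficiency idea among the entries above, notably FLIP's "Scaling Language-Image Pre-training via Masking" and also central to CLIPA's reduced-token training, is to encode only a subset of image patch tokens. A minimal sketch of random patch dropping, assuming ViT-style patch embeddings of shape (batch, num_patches, dim); the helper name and the keep_ratio default are illustrative, not taken from either codebase:

```python
import torch

def random_patch_mask(patches: torch.Tensor, keep_ratio: float = 0.5) -> torch.Tensor:
    """Keep a random subset of patch tokens before the image encoder.

    patches: (batch, num_patches, dim) patch embeddings.
    Returns a tensor of shape (batch, int(num_patches * keep_ratio), dim).
    """
    b, n, d = patches.shape
    n_keep = max(1, int(n * keep_ratio))
    # Draw an independent random permutation per sample and keep the
    # first n_keep indices of each.
    idx = torch.rand(b, n, device=patches.device).argsort(dim=1)[:, :n_keep]
    return patches.gather(1, idx.unsqueeze(-1).expand(-1, -1, d))
```

Dropping half the patches roughly halves encoder FLOPs per image, which is what lets masking-based methods raise batch size or model size at a fixed compute budget.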