LAION-AI / CLIP_benchmark
CLIP-like model evaluation
☆677 · Updated last month
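For context, the kind of zero-shot evaluation that CLIP_benchmark automates across many datasets can be sketched by hand with open_clip. This is a minimal illustration, not the benchmark's own code; the model tag, pretrained tag, label set, and image path are all assumptions:

```python
import torch
import open_clip
from PIL import Image

# Load a CLIP model via open_clip (model and pretrained tags are assumptions).
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k")
tokenizer = open_clip.get_tokenizer("ViT-B-32")
model.eval()

# Build zero-shot "classifier" weights from prompted class names.
classnames = ["cat", "dog"]  # toy label set
text = tokenizer([f"a photo of a {c}" for c in classnames])

with torch.no_grad():
    text_features = model.encode_text(text)
    text_features /= text_features.norm(dim=-1, keepdim=True)

    # Score one image (hypothetical path) against the label embeddings.
    image = preprocess(Image.open("example.jpg")).unsqueeze(0)
    image_features = model.encode_image(image)
    image_features /= image_features.norm(dim=-1, keepdim=True)

probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
print(dict(zip(classnames, probs.squeeze(0).tolist())))
```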
Alternatives and similar repositories for CLIP_benchmark:
Users interested in CLIP_benchmark are comparing it to the libraries listed below.
- Robust fine-tuning of zero-shot models ☆681 · Updated 2 years ago
- DataComp: In search of the next generation of multimodal datasets ☆687 · Updated last year
- ICLR 2024 Spotlight: curation/training code, metadata, distribution and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Expert… ☆1,371 · Updated last week
- Implementation of 🦩 Flamingo, state-of-the-art few-shot visual question answering attention net out of DeepMind, in PyTorch ☆1,235 · Updated 2 years ago
- [NeurIPS 2023] This repository includes the official implementation of the paper "An Inverse Scaling Law for CLIP Training" ☆312 · Updated 9 months ago
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm ☆649 · Updated 2 years ago
- A concise but complete implementation of CLIP with various experimental improvements from recent papers ☆708 · Updated last year
- Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in PyTorch ☆1,118 · Updated last year
- Multi-modality pre-training ☆487 · Updated 10 months ago
- A PyTorch Lightning solution to training OpenAI's CLIP from scratch. ☆683 · Updated 2 years ago
- A method to increase the speed and lower the memory footprint of existing vision transformers. ☆1,025 · Updated 9 months ago
- Official open-source code for "Scaling Language-Image Pre-training via Masking" ☆417 · Updated last year
- A collection of papers on the topic of "Computer Vision in the Wild (CVinW)" ☆1,264 · Updated last year
- [CVPR 2022] Official code for "Unified Contrastive Learning in Image-Text-Label Space" ☆393 · Updated last year
- 🧀 Code and models for the ICML 2023 paper "Grounding Language Models to Images for Multimodal Inputs and Outputs". ☆479 · Updated last year
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ☆852 · Updated 4 months ago
- When do we not need larger vision models? ☆380 · Updated last month
- Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side-by-side while providing imag… ☆506 · Updated 11 months ago
- Conceptual 12M is a dataset containing (image-URL, caption) pairs collected for vision-and-language pre-training. ☆384 · Updated 2 years ago
- Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time (see the weight-averaging sketch after this list) ☆450 · Updated 8 months ago
- Recent Advances in Vision and Language Pre-training (VLP) ☆293 · Updated last year
- ☆772 · Updated 8 months ago
- TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale. ☆1,561 · Updated last week
- Learning from synthetic data - code and models ☆313 · Updated last year
- [CVPR 2022] Official code for "RegionCLIP: Region-based Language-Image Pretraining" ☆753 · Updated last year
- Code release for "SLIP: Self-supervision meets Language-Image Pre-training" ☆762 · Updated 2 years ago
- [CVPR 2024] A benchmark for evaluating multimodal LLMs using multiple-choice questions. ☆332 · Updated 2 months ago
- ☆602 · Updated last year
- [CVPR 2024] Alpha-CLIP: A CLIP Model Focusing on Wherever You Want ☆792 · Updated 7 months ago
- iBOT: Image BERT Pre-Training with Online Tokenizer (ICLR 2022) ☆715 · Updated 2 years ago
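The "Model soups" entry above refers to simple weight averaging across fine-tuned checkpoints. A minimal uniform-soup sketch in PyTorch, where `uniform_soup` is a hypothetical helper and all checkpoints are assumed to store plain state dicts from one shared architecture:

```python
import torch

def uniform_soup(checkpoint_paths):
    """Average the parameters of several fine-tuned checkpoints (uniform soup).

    Hypothetical helper: assumes each file stores a plain state_dict and all
    checkpoints share the same key set. Integer buffers (e.g. BatchNorm
    counters) would need special handling; ViTs typically have none.
    """
    state_dicts = [torch.load(p, map_location="cpu") for p in checkpoint_paths]
    return {
        key: torch.mean(torch.stack([sd[key].float() for sd in state_dicts]), dim=0)
        for key in state_dicts[0]
    }

# Usage sketch: load the averaged weights into a fresh model instance.
# model.load_state_dict(uniform_soup(["ft_seed0.pt", "ft_seed1.pt", "ft_seed2.pt"]))
```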