LAION-AI / CLIP_benchmark
CLIP-like model evaluation
☆791 · Updated 3 weeks ago
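For context, the evaluation this kind of harness automates is zero-shot classification: encode the images and a set of class prompts, then score each image against every prompt and check whether the top-scoring prompt matches the label. A minimal sketch using the open_clip API is shown below; the model name, pretrained tag, and prompts are illustrative assumptions, not settings taken from CLIP_benchmark itself.

```python
# Minimal zero-shot evaluation sketch with open_clip.
# Model name, pretrained tag, and prompts below are illustrative assumptions.
import torch
import open_clip
from PIL import Image

# Load a CLIP model and its preprocessing transform (downloads weights on first use).
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)
tokenizer = open_clip.get_tokenizer("ViT-B-32")
model.eval()

# A dummy image stands in for a real dataset sample here.
image = preprocess(Image.new("RGB", (224, 224))).unsqueeze(0)
text = tokenizer(["a photo of a cat", "a photo of a dog"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Cosine similarity between image and class-prompt embeddings.
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

# Benchmark accuracy is the fraction of images whose top prompt matches the label.
print(probs)
```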
Alternatives and similar repositories for CLIP_benchmark
Users interested in CLIP_benchmark are comparing it to the libraries listed below.
- Robust fine-tuning of zero-shot models ☆756 · Updated 3 years ago
- DataComp: In search of the next generation of multimodal datasets ☆750 · Updated 7 months ago
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm ☆669 · Updated 3 years ago
- A PyTorch Lightning solution to training OpenAI's CLIP from scratch. ☆717 · Updated 3 years ago
- GIT: A Generative Image-to-text Transformer for Vision and Language ☆578 · Updated 2 years ago
- Implementation of 🦩 Flamingo, state-of-the-art few-shot visual question answering attention net out of Deepmind, in Pytorch ☆1,270 · Updated 3 years ago
- Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in Pytorch ☆1,185 · Updated last year
- A method to increase the speed and lower the memory footprint of existing vision transformers. ☆1,122 · Updated last year
- [NeurIPS 2023] This repository includes the official implementation of our paper "An Inverse Scaling Law for CLIP Training" ☆320 · Updated last year
- NeurIPS 2025 Spotlight; ICLR 2024 Spotlight; CVPR 2024; EMNLP 2024 ☆1,750 · Updated last week
- [CVPR 2022] Official code for "Unified Contrastive Learning in Image-Text-Label Space" ☆403 · Updated 2 years ago
- When do we not need larger vision models? ☆412 · Updated 9 months ago
- 🐟 Code and models for the NeurIPS 2023 paper "Generating Images with Multimodal Language Models". ☆470 · Updated last year
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training). ☆1,229 · Updated last year
- 🧀 Code and models for the ICML 2023 paper "Grounding Language Models to Images for Multimodal Inputs and Outputs". ☆484 · Updated 2 years ago
- OpenAI CLIP text encoders for multiple languages! ☆821 · Updated 2 years ago
- Code release for SLIP: Self-supervision meets Language-Image Pre-training ☆783 · Updated 2 years ago
- Multi-modality pre-training ☆505 · Updated last year
- Conceptual 12M is a dataset containing (image-URL, caption) pairs collected for vision-and-language pre-training. ☆407 · Updated 4 months ago
- A concise but complete implementation of CLIP with various experimental improvements from recent papers ☆719 · Updated 2 years ago
- Official Open Source code for "Scaling Language-Image Pre-training via Masking" ☆427 · Updated 2 years ago
- [CVPR 2022] Official code for "RegionCLIP: Region-based Language-Image Pretraining" ☆797 · Updated last year
- Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time ☆498 · Updated last year
- X-VLM: Multi-Grained Vision Language Pre-Training (ICML 2022) ☆487 · Updated 3 years ago
- [ICCV 2021 Oral] Official PyTorch implementation for "Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers" ☆881 · Updated 2 years ago
- [NeurIPS 2023] Official implementations of "Cheap and Quick: Efficient Vision-Language Instruction Tuning for Large Language Models" ☆524 · Updated last year
- LLM2CLIP makes SOTA pretrained CLIP models even stronger. ☆567 · Updated this week
- GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest ☆549 · Updated 6 months ago
- ☆689 · Updated 3 weeks ago
- (CVPR 2024) A benchmark for evaluating Multimodal LLMs using multiple-choice questions. ☆356 · Updated 10 months ago