LAION-AI / CLIP_benchmark
CLIP-like model evaluation
☆615 · Updated 3 months ago
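For orientation, below is a minimal zero-shot evaluation sketch of the kind CLIP_benchmark automates, written directly against open_clip (one of the model backends the benchmark supports) rather than the benchmark's own harness. The `ViT-B-32` / `laion2b_s34b_b79k` checkpoint tags, the CIFAR-10 choice, and the single prompt template are illustrative assumptions, not the benchmark's pinned defaults.

```python
# Zero-shot CIFAR-10 classification with an open_clip model.
# Sketch only: model/pretrained tags and the prompt template are assumptions.
import torch
import open_clip
from torch.utils.data import DataLoader
from torchvision.datasets import CIFAR10

device = "cuda" if torch.cuda.is_available() else "cpu"

# create_model_and_transforms returns (model, train_transform, eval_transform).
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)
tokenizer = open_clip.get_tokenizer("ViT-B-32")
model = model.to(device).eval()

dataset = CIFAR10(root="data", train=False, download=True, transform=preprocess)
loader = DataLoader(dataset, batch_size=256, num_workers=4)

# One text embedding per class from a simple prompt template.
prompts = [f"a photo of a {c}" for c in dataset.classes]
with torch.no_grad():
    text_features = model.encode_text(tokenizer(prompts).to(device))
    text_features /= text_features.norm(dim=-1, keepdim=True)

correct = total = 0
with torch.no_grad():
    for images, labels in loader:
        image_features = model.encode_image(images.to(device))
        image_features /= image_features.norm(dim=-1, keepdim=True)
        # Nearest text embedding by cosine similarity is the prediction.
        preds = (image_features @ text_features.T).argmax(dim=-1).cpu()
        correct += (preds == labels).sum().item()
        total += labels.numel()

print(f"zero-shot top-1 accuracy: {correct / total:.4f}")
```

The repository's CLI wraps this loop for many datasets, tasks, and prompt ensembles; see its README for the supported options.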
Related projects
Alternatives and complementary repositories for CLIP_benchmark
- DataComp: In search of the next generation of multimodal datasets ☆657 · Updated 10 months ago
- Robust fine-tuning of zero-shot models ☆649 · Updated 2 years ago
- A concise but complete implementation of CLIP with various experimental improvements from recent papers ☆693 · Updated last year
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm ☆636 · Updated 2 years ago
- Implementation of 🦩 Flamingo, state-of-the-art few-shot visual question answering attention net out of DeepMind, in PyTorch ☆1,215 · Updated 2 years ago
- [NeurIPS 2023] Official implementation of the paper "An Inverse Scaling Law for CLIP Training" ☆298 · Updated 5 months ago
- A PyTorch Lightning solution for training OpenAI's CLIP from scratch ☆665 · Updated 2 years ago
- Official open-source code for "Scaling Language-Image Pre-training via Masking" ☆407 · Updated last year
- ICLR 2024 Spotlight: curation/training code, metadata, distribution, and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Expert… ☆1,255 · Updated this week
- GIT: A Generative Image-to-text Transformer for Vision and Language ☆549 · Updated 11 months ago
- Multi-modality pre-training ☆471 · Updated 6 months ago
- Implementation of CoCa, "Contrastive Captioners are Image-Text Foundation Models", in PyTorch ☆1,067 · Updated 11 months ago
- Code release for "SLIP: Self-supervision meets Language-Image Pre-training" ☆747 · Updated last year
- Recent Advances in Vision and Language Pre-training (VLP) ☆288 · Updated last year
- [NeurIPS 2023] Text data, code, and pre-trained models for the paper "Improving CLIP Training with Language Rewrites" ☆260 · Updated 10 months ago
- A method to increase the speed and lower the memory footprint of existing vision transformers ☆970 · Updated 5 months ago
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training) ☆1,136 · Updated 4 months ago
- 🧀 Code and models for the ICML 2023 paper "Grounding Language Models to Images for Multimodal Inputs and Outputs" ☆478 · Updated last year
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ☆782 · Updated 5 months ago
- Conceptual 12M: a dataset of (image-URL, caption) pairs collected for vision-and-language pre-training ☆368 · Updated last year
- An open-source implementation of "Scaling Autoregressive Multi-Modal Models: Pretraining and Instruction Tuning", an all-new multi modal … ☆362 · Updated 11 months ago
- [CVPR 2022] Official code for "Unified Contrastive Learning in Image-Text-Label Space" ☆389 · Updated last year
- OpenAI CLIP text encoders for multiple languages! ☆763 · Updated last year
- [NeurIPS 2023] Official implementation of "Cheap and Quick: Efficient Vision-Language Instruction Tuning for Large Language Models" ☆509 · Updated 9 months ago
- TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale ☆1,474 · Updated this week
- An official implementation of "CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval" ☆881 · Updated 7 months ago
- 🐟 Code and models for the NeurIPS 2023 paper "Generating Images with Multimodal Language Models" ☆433 · Updated 10 months ago
- ☆470 · Updated 2 years ago
- A Survey on multimodal learning research.