jacobmarks / awesome-clip-papers
The most impactful papers related to contrastive pretraining for multimodal models!
☆67 · Updated last year
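The common thread across this list is CLIP-style contrastive pretraining, which aligns paired image and text embeddings with a symmetric InfoNCE loss. As a quick orientation, here is a minimal PyTorch sketch of that objective; it is not taken from any repository listed here, and the embedding dimension and temperature are illustrative assumptions.

```python
# Minimal sketch of a symmetric contrastive (CLIP-style) objective.
# The random tensors below stand in for hypothetical encoder outputs.
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_features, text_features, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired image/text embeddings."""
    # L2-normalize so the dot product is a cosine similarity.
    image_features = F.normalize(image_features, dim=-1)
    text_features = F.normalize(text_features, dim=-1)

    # Pairwise similarities; the i-th image matches the i-th text.
    logits = image_features @ text_features.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)

    # Cross-entropy in both directions (image->text and text->image).
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2

# Toy usage with random embeddings standing in for encoder outputs.
if __name__ == "__main__":
    img = torch.randn(8, 512)   # e.g. vision-encoder outputs
    txt = torch.randn(8, 512)   # e.g. text-encoder outputs
    print(clip_contrastive_loss(img, txt).item())
```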
Alternatives and similar repositories for awesome-clip-papers
Users interested in awesome-clip-papers are comparing it to the repositories listed below.
- Code for experiments for "ConvNet vs Transformer, Supervised vs CLIP: Beyond ImageNet Accuracy" ☆101 · Updated 10 months ago
- Holds code for our CVPR'23 tutorial: All Things ViTs: Understanding and Interpreting Attention in Vision. ☆194 · Updated 2 years ago
- ☆50 · Updated 6 months ago
- [CVPR24] Official Implementation of GEM (Grounding Everything Module) ☆126 · Updated 3 months ago
- [ICML 2024] This repository includes the official implementation of our paper "Rejuvenating image-GPT as Strong Visual Representation Lea… ☆98 · Updated last year
- ICCV 2023: CLIPN for Zero-Shot OOD Detection: Teaching CLIP to Say No ☆139 · Updated last year
- ☆111 · Updated last year
- 1.5−3.0× lossless training or pre-training speedup. An off-the-shelf, easy-to-implement algorithm for the efficient training of foundatio… ☆221 · Updated 10 months ago
- [NeurIPS2022] This is the official implementation of the paper "Expediting Large-Scale Vision Transformer for Dense Prediction without Fi… ☆85 · Updated last year
- [CVPR'24] Validation-free few-shot adaptation of CLIP, using a well-initialized Linear Probe (ZSLP) and class-adaptive constraints (CLAP)… ☆74 · Updated last month (see the generic linear-probe sketch after this list)
- [CVPR-2024] Official implementations of CLIP-KD: An Empirical Study of CLIP Model Distillation ☆122 · Updated last year
- PyTorch reimplementation of FlexiViT: One Model for All Patch Sizes ☆62 · Updated last year
- The official implementation of the paper "MMFuser: Multimodal Multi-Layer Feature Fuser for Fine-Grained Vision-Language Understanding". … ☆56 · Updated 8 months ago
- Official Implementation of Attentive Mask CLIP (ICCV2023, https://arxiv.org/abs/2212.08653) ☆32 · Updated last year
- Official Pytorch Implementation of Paper "A Semantic Space is Worth 256 Language Descriptions: Make Stronger Segmentation Models with Des… ☆55 · Updated last year
- [CVPR 2023] Prompt, Generate, then Cache: Cascade of Foundation Models makes Strong Few-shot Learners ☆44 · Updated 2 years ago
- Code implementation of our NeurIPS 2023 paper: Vocabulary-free Image Classification ☆107 · Updated last year
- ☆41 · Updated 6 months ago
- ☆42 · Updated last year
- [NeurIPS 2024] MoVA: Adapting Mixture of Vision Experts to Multimodal Context ☆163 · Updated 9 months ago
- PyTorch implementation of R-MAE (https://arxiv.org/abs/2306.05411) ☆113 · Updated 2 years ago
- [NeurIPS 2024] MoME: Mixture of Multimodal Experts for Generalist Multimodal Large Language Models ☆68 · Updated 2 months ago
- [CVPR 2024] Official implementation of "ViTamin: Designing Scalable Vision Models in the Vision-language Era" ☆206 · Updated last year
- [ECCV 2024] Official PyTorch implementation of DreamLIP: Language-Image Pre-training with Long Captions ☆134 · Updated 2 months ago
- [CVPR 2025] FLAIR: VLM with Fine-grained Language-informed Image Representations ☆86 · Updated 3 weeks ago
- [ECCV 2024] Official Release of SILC: Improving vision language pretraining with self-distillation ☆44 · Updated 9 months ago
- This is an official implementation for [ICLR'24] INTR: Interpretable Transformer for Fine-grained Image Classification. ☆51 · Updated last year
- ☆91 · Updated 2 years ago
- [NeurIPS'23] DropPos: Pre-Training Vision Transformers by Reconstructing Dropped Positions ☆61 · Updated last year
- [NeurIPS2023] Parameter-efficient Tuning of Large-scale Multimodal Foundation Model ☆88 · Updated last year
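Several entries above (e.g. the CLAP/ZSLP repository) adapt frozen CLIP features for few-shot classification via a linear probe. The sketch below is only the generic linear-probe baseline on precomputed, frozen image features, not that paper's validation-free method; the feature dimension, class count, and training-loop hyperparameters are illustrative assumptions.

```python
# Generic linear-probe baseline on frozen (precomputed) image features.
# NOT the CLAP/ZSLP method itself; dimensions and hyperparameters are
# illustrative assumptions.
import torch
import torch.nn as nn

def train_linear_probe(features, labels, num_classes, epochs=100, lr=1e-3):
    """Fit a single linear layer on top of frozen embeddings."""
    probe = nn.Linear(features.size(1), num_classes)
    optimizer = torch.optim.AdamW(probe.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = criterion(probe(features), labels)
        loss.backward()
        optimizer.step()
    return probe

# Toy usage: random tensors standing in for CLIP image features and labels.
if __name__ == "__main__":
    feats = torch.randn(320, 512)            # frozen image embeddings
    labels = torch.randint(0, 10, (320,))    # few-shot class labels
    probe = train_linear_probe(feats, labels, num_classes=10)
    acc = (probe(feats).argmax(dim=1) == labels).float().mean()
    print(f"train accuracy: {acc:.2f}")
```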