jacobmarks / awesome-clip-papers
The most impactful papers related to contrastive pretraining for multimodal models!
☆68 · Updated last year
Alternatives and similar repositories for awesome-clip-papers
Users interested in awesome-clip-papers are comparing it to the repositories listed below.
- Code for experiments for "ConvNet vs Transformer, Supervised vs CLIP: Beyond ImageNet Accuracy" ☆101 · Updated 10 months ago
- 1.5−3.0× lossless training or pre-training speedup. An off-the-shelf, easy-to-implement algorithm for the efficient training of foundatio… ☆222 · Updated 11 months ago
- [NeurIPS 2022] This is the official implementation of the paper "Expediting Large-Scale Vision Transformer for Dense Prediction without Fine-tuning" ☆85 · Updated last year
- Holds code for our CVPR'23 tutorial: All Things ViTs: Understanding and Interpreting Attention in Vision. ☆194 · Updated 2 years ago
- Proteus (ICLR 2025) ☆49 · Updated 4 months ago
- [CVPR 2024] Official implementation of GEM (Grounding Everything Module) ☆127 · Updated 4 months ago
- A Contrastive Learning Boost from Intermediate Pre-Trained Representations ☆42 · Updated 10 months ago
- [CVPR 2024] Official implementation of "ViTamin: Designing Scalable Vision Models in the Vision-Language Era" ☆207 · Updated last year
- [CVPR 2023] Prompt, Generate, then Cache: Cascade of Foundation Models makes Strong Few-shot Learners ☆44 · Updated 2 years ago
- ☆51 · Updated 6 months ago
- [CVPR 2024] Validation-free few-shot adaptation of CLIP, using a well-initialized Linear Probe (ZSLP) and class-adaptive constraints (CLAP)… ☆75 · Updated 2 months ago
- PyTorch reimplementation of FlexiViT: One Model for All Patch Sizes ☆61 · Updated last year
- ☆119 · Updated last year
- Evaluation and dataset construction code for the CVPR 2025 paper "Vision-Language Models Do Not Understand Negation" ☆27 · Updated 3 months ago
- [ICCV 2023] CLIPN for Zero-Shot OOD Detection: Teaching CLIP to Say No ☆139 · Updated last year
- [ICML 2024] This repository includes the official implementation of our paper "Rejuvenating image-GPT as Strong Visual Representation Learners" ☆98 · Updated last year
- Official repo for our ICML 2023 paper "Multi-Modal Classifiers for Open-Vocabulary Object Detection" ☆93 · Updated 2 years ago
- [CVPR 2024] Official implementations of CLIP-KD: An Empirical Study of CLIP Model Distillation ☆123 · Updated last year
- [ECCV 2024] ProxyCLIP: Proxy Attention Improves CLIP for Open-Vocabulary Segmentation ☆98 · Updated 4 months ago
- ☆35 · Updated 3 weeks ago
- This is an official implementation for [ICLR 2024] INTR: Interpretable Transformer for Fine-grained Image Classification. ☆51 · Updated last year
- PyTorch implementation of R-MAE (https://arxiv.org/abs/2306.05411) ☆113 · Updated 2 years ago
- Official repository of the paper "Subobject-level Image Tokenization" (ICML 2025) ☆80 · Updated last month
- [NeurIPS 2024] Official implementation of the paper "Interfacing Foundation Models' Embeddings" ☆125 · Updated 11 months ago
- ☆42 · Updated last year
- PyTorch implementation of the ICML 2023 paper "SegCLIP: Patch Aggregation with Learnable Centers for Open-Vocabulary Semantic Segmentation" ☆93 · Updated 2 years ago
- Generalized Out-of-Distribution Detection and Beyond in Vision Language Model Era: A Survey [Miyai+, TMLR 2025] ☆95 · Updated last month
- Official implementation of SCLIP: Rethinking Self-Attention for Dense Vision-Language Inference ☆161 · Updated 10 months ago
- Continual Forgetting for Pre-trained Vision Models (CVPR 2024) ☆67 · Updated 3 weeks ago
- [ECCV 2024] ClearCLIP: Decomposing CLIP Representations for Dense Vision-Language Inference ☆86 · Updated 4 months ago