jacobmarks / awesome-clip-papers
The most impactful papers related to contrastive pretraining for multimodal models!
☆58 · Updated 11 months ago
Alternatives and similar repositories for awesome-clip-papers:
Users interested in awesome-clip-papers are comparing it to the repositories listed below
- ☆41 · Updated 3 weeks ago
- Code for experiments for "ConvNet vs Transformer, Supervised vs CLIP: Beyond ImageNet Accuracy" ☆100 · Updated 5 months ago
- Holds code for our CVPR'23 tutorial: All Things ViTs: Understanding and Interpreting Attention in Vision. ☆181 · Updated last year
- Open source implementation of "Vision Transformers Need Registers" ☆163 · Updated 2 weeks ago
- Reproducible scaling laws for contrastive language-image learning (https://arxiv.org/abs/2212.07143) ☆158 · Updated last year
- The official implementation of "Adapter is All You Need for Tuning Visual Tasks". ☆77 · Updated 5 months ago
- [ICML 2024] This repository includes the official implementation of our paper "Rejuvenating image-GPT as Strong Visual Representation Lea… ☆97 · Updated 9 months ago
- [CVPR24] Official Implementation of GEM (Grounding Everything Module) ☆109 · Updated 3 months ago
- [ECCV2024] ClearCLIP: Decomposing CLIP Representations for Dense Vision-Language Inference ☆74 · Updated 5 months ago
- Official repo for our ICML 23 paper: "Multi-Modal Classifiers for Open-Vocabulary Object Detection" ☆88 · Updated last year
- [CVPR'24] Validation-free few-shot adaptation of CLIP, using a well-initialized Linear Probe (ZSLP) and class-adaptive constraints (CLAP)… ☆66 · Updated 8 months ago
- Official repository of paper "Subobject-level Image Tokenization" ☆65 · Updated 9 months ago
- PyTorch implementation of R-MAE https://arxiv.org/abs/2306.05411 ☆110 · Updated last year
- Code implementation of our NeurIPS 2023 paper: Vocabulary-free Image Classification ☆106 · Updated last year
- [ECCV2024] ProxyCLIP: Proxy Attention Improves CLIP for Open-Vocabulary Segmentation ☆74 · Updated this week
- Official Implementation of the CrossMAE paper: Rethinking Patch Dependence for Masked Autoencoders ☆100 · Updated 2 months ago
- [NeurIPS2023] Parameter-efficient Tuning of Large-scale Multimodal Foundation Model ☆85 · Updated last year
- Connecting segment-anything's output masks with the CLIP model; Awesome-Segment-Anything-Works ☆185 · Updated 4 months ago
- [CVPR'23] Hard Patches Mining for Masked Image Modeling ☆89 · Updated last year
- PyTorch implementation of ICML 2023 paper "SegCLIP: Patch Aggregation with Learnable Centers for Open-Vocabulary Semantic Segmentation" ☆87 · Updated last year
- Official implementation of SCLIP: Rethinking Self-Attention for Dense Vision-Language Inference ☆145 · Updated 4 months ago
- Object Recognition as Next Token Prediction (CVPR 2024 Highlight) ☆171 · Updated last month
- [CVPR 2024] Official implementation of "ViTamin: Designing Scalable Vision Models in the Vision-language Era" ☆197 · Updated 8 months ago
- [ECCV 2024] Official PyTorch implementation of DreamLIP: Language-Image Pre-training with Long Captions ☆123 · Updated 2 months ago
- This repository is the official implementation of our Autoregressive Pretraining with Mamba in Vision ☆68 · Updated 7 months ago
- [ECCV 2024] Official Release of SILC: Improving vision language pretraining with self-distillation ☆40 · Updated 4 months ago
- Continual Forgetting for Pre-trained Vision Models (CVPR 2024) ☆54 · Updated 2 weeks ago
- Code Release of F-LMM: Grounding Frozen Large Multimodal Models ☆62 · Updated 6 months ago
- Generalized Out-of-Distribution Detection and Beyond in Vision Language Model Era: A Survey [Miyai+, arXiv2024] ☆82 · Updated 2 weeks ago
- [AAAI2025] ChatterBox: Multi-round Multimodal Referring and Grounding, Multimodal, Multi-round dialogues ☆50 · Updated 2 months ago
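
Most of the repositories above build on the CLIP-style contrastive pretraining objective that the awesome-clip-papers list covers. As background only, here is a minimal sketch of that symmetric contrastive (InfoNCE) loss in PyTorch; the function name, tensor shapes, and temperature value are illustrative assumptions, not code taken from any listed repository.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_embeds, text_embeds, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings.

    image_embeds, text_embeds: (batch, dim) tensors from the two encoders;
    matching image/caption pairs sit on the diagonal of the similarity matrix.
    """
    # L2-normalize so the dot products below are cosine similarities.
    image_embeds = F.normalize(image_embeds, dim=-1)
    text_embeds = F.normalize(text_embeds, dim=-1)

    # (batch, batch) similarity matrix, scaled by the temperature.
    logits = image_embeds @ text_embeds.t() / temperature

    # Each image should match its own caption, and vice versa.
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_image_to_text = F.cross_entropy(logits, targets)
    loss_text_to_image = F.cross_entropy(logits.t(), targets)
    return (loss_image_to_text + loss_text_to_image) / 2
```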