KaiyangZhou / CoOp
Prompt Learning for Vision-Language Models (IJCV'22, CVPR'22)
☆2,162 · Updated last year
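For context on what the CoOp repository implements: it replaces CLIP's hand-written prompt templates ("a photo of a {class}") with learnable context vectors optimized on downstream data while the CLIP backbone stays frozen. Below is a minimal sketch of that idea, not the official code; the class and function names (`PromptLearner`, `clip_style_logits`), dimensions, and the random stand-ins for the frozen encoders are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PromptLearner(nn.Module):
    """Learnable context vectors shared across all classes (CoOp-style unified context)."""

    def __init__(self, n_ctx: int, embed_dim: int, class_token_embeds: torch.Tensor):
        super().__init__()
        # The context vectors are the only trainable parameters; the CLIP backbone stays frozen.
        self.ctx = nn.Parameter(torch.randn(n_ctx, embed_dim) * 0.02)
        # Frozen token embeddings of the class names: (n_classes, n_name_tokens, embed_dim).
        self.register_buffer("cls_embeds", class_token_embeds)

    def forward(self) -> torch.Tensor:
        n_classes = self.cls_embeds.size(0)
        ctx = self.ctx.unsqueeze(0).expand(n_classes, -1, -1)  # (C, n_ctx, D)
        # Prompt per class = [learned context tokens] + [class-name tokens].
        return torch.cat([ctx, self.cls_embeds], dim=1)        # (C, n_ctx + n_name_tokens, D)


def clip_style_logits(image_feats: torch.Tensor, text_feats: torch.Tensor,
                      logit_scale: float = 100.0) -> torch.Tensor:
    """Cosine-similarity classification logits between image and per-class text features."""
    image_feats = F.normalize(image_feats, dim=-1)
    text_feats = F.normalize(text_feats, dim=-1)
    return logit_scale * image_feats @ text_feats.t()


# Toy usage with random tensors standing in for a frozen CLIP text/image encoder.
if __name__ == "__main__":
    C, n_ctx, n_name_tokens, D = 10, 16, 4, 512
    learner = PromptLearner(n_ctx, D, torch.randn(C, n_name_tokens, D))
    prompts = learner()               # (10, 20, 512) token embeddings fed to the text encoder
    text_feats = prompts.mean(dim=1)  # placeholder for the frozen text encoder's output
    image_feats = torch.randn(8, D)   # placeholder for frozen image-encoder features
    logits = clip_style_logits(image_feats, text_feats)
    print(logits.shape)               # torch.Size([8, 10])
```

In the actual method, only `self.ctx` receives gradients (via cross-entropy on the logits), which is why a few labeled examples per class are enough to adapt a frozen vision-language model.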
Alternatives and similar repositories for CoOp
Users interested in CoOp are comparing it to the repositories listed below
- ❄️🔥 Visual Prompt Tuning [ECCV 2022] https://arxiv.org/abs/2203.12119 ☆1,209 · Updated 2 years ago
- [CVPR 2023] Official repository of paper titled "MaPLe: Multi-modal Prompt Learning". ☆800 · Updated 2 years ago
- ☆569 · Updated 3 years ago
- ☆657 · Updated 2 years ago
- Code for ALBEF: a new vision-language pre-training method ☆1,749 · Updated 3 years ago
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training). ☆1,231 · Updated last year
- A curated list of prompt-based papers in computer vision and vision-language learning. ☆929 · Updated 2 years ago
- Grounded Language-Image Pre-training ☆2,566 · Updated 2 years ago
- Code for the ICML 2021 (long talk) paper: "ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision" ☆1,520 · Updated last year
- A PyTorch toolbox for domain generalization, domain adaptation and semi-supervised learning. ☆1,409 · Updated 2 years ago
- PyTorch implementation of MoCo v3 https://arxiv.org/abs/2104.02057 ☆1,312 · Updated 4 years ago
- A collection of papers on the topic of "Computer Vision in the Wild (CVinW)" ☆1,352 · Updated last year
- [ICCV 2021 - Oral] Official PyTorch implementation for Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decode… ☆895 · Updated 2 years ago
- [CVPR 2022] Official code for "RegionCLIP: Region-based Language-Image Pretraining" ☆805 · Updated last year
- Robust fine-tuning of zero-shot models ☆759 · Updated 3 years ago
- Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in Pytorch ☆1,195 · Updated 2 years ago
- assistant tools for attention visualization in deep learning ☆1,259 · Updated 3 years ago
- awesome grounding: A curated list of research papers in visual grounding ☆1,124 · Updated 4 months ago
- A PyTorch Lightning solution to training OpenAI's CLIP from scratch. ☆719 · Updated 3 years ago
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm ☆672 · Updated 3 years ago
- EVA Series: Visual Representation Fantasies from BAAI ☆2,639 · Updated last year
- Explainability for Vision Transformers ☆1,060 · Updated 3 years ago
- A curated list of awesome prompt/adapter learning methods for vision-language models like CLIP. ☆745 · Updated last month
- [ICLR 2023 Spotlight] Vision Transformer Adapter for Dense Predictions ☆1,463 · Updated 7 months ago
- [CVPR 2024] Alpha-CLIP: A CLIP Model Focusing on Wherever You Want ☆864 · Updated 6 months ago
- This repo lists relevant papers summarized in our survey paper: A Systematic Survey of Prompt Engineering on Vision-Language Foundation … ☆508 · Updated 10 months ago
- A collection of papers about Referring Image Segmentation. ☆804 · Updated 2 months ago
- Official PyTorch implementation of GroupViT: Semantic Segmentation Emerges from Text Supervision, CVPR 2022. ☆780 · Updated 3 years ago
- An official implementation for "CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval" ☆1,022 · Updated last year
- A collection of literature after or concurrent with Masked Autoencoder (MAE) (Kaiming He et al.). ☆858 · Updated last year