thunlp / CPT
Colorful Prompt Tuning for Pre-trained Vision-Language Models
☆46 · Updated 2 years ago
Related projects
Alternatives and complementary repositories for CPT
- Source code for the EMNLP 2022 paper “PEVL: Position-enhanced Pre-training and Prompt Tuning for Vision-language Models” ☆47 · Updated 2 years ago
- ☆56 · Updated 2 years ago
- ☆26 · Updated 9 months ago
- ☆55 · Updated last year
- Official implementation for the CVPR 2022 paper "Unsupervised Vision-Language Parsing: Seamlessly Bridging Visual Scene Graphs with Language … ☆23 · Updated 2 years ago
- Official code for "What Makes for Good Visual Tokenizers for Large Language Models?" ☆56 · Updated last year
- VQACL: A Novel Visual Question Answering Continual Learning Setting (CVPR'23) ☆31 · Updated 7 months ago
- Repo for the paper "Paxion: Patching Action Knowledge in Video-Language Foundation Models" (NeurIPS 2023 Spotlight) ☆35 · Updated last year
- Code for "Multitask Vision-Language Prompt Tuning" https://arxiv.org/abs/2211.11720 ☆54 · Updated 5 months ago
- Repository for the paper "Teaching Structured Vision & Language Concepts to Vision & Language Models" ☆45 · Updated last year
- [ECCV'22 Poster] Explicit Image Caption Editing ☆21 · Updated last year
- 📍 Official PyTorch implementation of the paper "ProtoCLIP: Prototypical Contrastive Language Image Pretraining" (IEEE TNNLS) ☆48 · Updated last year
- This repo contains code and instructions for the baselines in the VLUE benchmark. ☆41 · Updated 2 years ago
- MMICL, a state-of-the-art VLM with in-context learning (ICL) ability, from PKU ☆41 · Updated last year
- [ACL 2023] Delving into the Openness of CLIP ☆23 · Updated last year
- Official implementation of our EMNLP 2022 paper "CPL: Counterfactual Prompt Learning for Vision and Language Models" ☆32 · Updated last year
- [CVPR 2022] A large-scale public benchmark dataset for video question-answering, especially about evidence and commonsense reasoning. The… ☆51 · Updated 4 months ago
- Repository of the paper "Position-Enhanced Visual Instruction Tuning for Multimodal Large Language Models" ☆36 · Updated last year
- [CVPR 2024] Contrasting Intra-Modal and Ranking Cross-Modal Hard Negatives to Enhance Visio-Linguistic Fine-grained Understanding ☆41 · Updated 3 months ago
- This repo contains code for "Invariant Grounding for Video Question Answering" ☆26 · Updated last year
- Official repository for the CVPR 2022 paper "REX: Reasoning-aware and Grounded Explanation" ☆18 · Updated last year
- ☆24 · Updated 4 months ago
- ☆25 · Updated 2 years ago
- NegCLIP ☆26 · Updated last year
- CVPR 2022 (Oral) PyTorch code for "Unsupervised Vision-and-Language Pre-training via Retrieval-based Multi-Granular Alignment" ☆22 · Updated 2 years ago
- Video Graph Transformer for Video Question Answering (ECCV'22) ☆45 · Updated last year
- COLA: Evaluate how well your vision-language model can Compose Objects Localized with Attributes! ☆22 · Updated 5 months ago
- Official repository for the A-OKVQA dataset ☆64 · Updated 6 months ago
- This repo is the official implementation of UPL (Unsupervised Prompt Learning for Vision-Language Models). ☆106 · Updated 2 years ago
- ☆63 · Updated 5 years ago