zycheiheihei / Transferable-Visual-Prompting
[CVPR 2024 Highlight] Official implementation of Transferable Visual Prompting. The paper "Exploring the Transferability of Visual Prompting for Multimodal Large Language Models" has been accepted at CVPR 2024.
☆34 · Updated last month
Alternatives and similar repositories for Transferable-Visual-Prompting:
Users interested in Transferable-Visual-Prompting are comparing it to the repositories listed below
- Distribution-Aware Prompt Tuning for Vision-Language Models (ICCV 2023) ☆38 · Updated last year
- PyTorch code for "Contrastive Region Guidance: Improving Grounding in Vision-Language Models without Training" ☆31 · Updated 10 months ago
- [ECCV 2024] API: Attention Prompting on Image for Large Vision-Language Models ☆60 · Updated 3 months ago
- Compress conventional Vision-Language Pre-training data ☆49 · Updated last year
- ☆26 · Updated last year
- Exploring prompt tuning with pseudolabels for multiple modalities, learning settings, and training strategies. ☆47 · Updated 2 months ago
- [NeurIPS 2024] Official PyTorch implementation of "Improving Compositional Reasoning of CLIP via Synthetic Vision-Language Negatives" ☆33 · Updated last month
- [CVPR 2024 Highlight] ImageNet-D ☆40 · Updated 3 months ago
- ☆22 · Updated 7 months ago
- Code Release of F-LMM: Grounding Frozen Large Multimodal Models ☆61 · Updated 5 months ago
- Official repository for the ICCV 2023 paper: "Waffling around for Performance: Visual Classification with Random Words and Broad Concepts… ☆56 · Updated last year
- [ICLR 2024] Test-Time Adaptation with CLIP Reward for Zero-Shot Generalization in Vision-Language Models. ☆66 · Updated 6 months ago
- This repository houses the code for the paper "The Neglected Tails in Vision-Language Models" ☆25 · Updated last month
- [NeurIPS 2024] MoME: Mixture of Multimodal Experts for Generalist Multimodal Large Language Models ☆44 · Updated last month
- Accelerating Vision-Language Pretraining with Free Language Modeling (CVPR 2023) ☆31 · Updated last year
- ☆20 · Updated last year
- NegCLIP. ☆30 · Updated last year
- [ECCV 2024] Mind the Interference: Retaining Pre-trained Knowledge in Parameter Efficient Continual Learning of Vision-Language Models ☆39 · Updated 6 months ago
- Task Residual for Tuning Vision-Language Models (CVPR 2023) ☆68 · Updated last year
- Official code for "Understanding and Mitigating Overfitting in Prompt Tuning for Vision-Language Models" (TCSVT 2023) ☆27 · Updated last year
- [CVPR 2024] Contrasting Intra-Modal and Ranking Cross-Modal Hard Negatives to Enhance Visio-Linguistic Fine-grained Understanding ☆45 · Updated 6 months ago
- [CVPR 2024] Improving language-visual pretraining efficiency by performing cluster-based masking on images. ☆25 · Updated 8 months ago
- Official code for "pi-Tuning: Transferring Multimodal Foundation Models with Optimal Multi-task Interpolation", ICML 2023. ☆32 · Updated last year
- HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data (Accepted by CVPR 2024) ☆43 · Updated 6 months ago
- [ICCV 2023] Diverse Data Augmentation with Diffusions for Effective Test-time Prompt Tuning & [IJCV 2025] Diffusion-Enhanced Test-time Adap… ☆60 · Updated 2 weeks ago
- [arXiv 2024] AGLA: Mitigating Object Hallucinations in Large Vision-Language Models with Assembly of Global and Local Attention ☆21 · Updated 6 months ago
- This is the official repo for Debiasing Large Visual Language Models, including a Post-Hoc debias method and Visual Debias Decoding strat… ☆76 · Updated 10 months ago
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models ☆62 · Updated 7 months ago
- Augmenting with Language-guided Image Augmentation (ALIA) ☆70 · Updated last year
- Towards Unified and Effective Domain Generalization ☆29 · Updated last year