zycheiheihei / Transferable-Visual-Prompting
[CVPR 2024 Highlight] Official implementation of Transferable Visual Prompting. The paper "Exploring the Transferability of Visual Prompting for Multimodal Large Language Models" was accepted at CVPR 2024.
☆46 · Updated last year
Alternatives and similar repositories for Transferable-Visual-Prompting
Users interested in Transferable-Visual-Prompting are comparing it to the repositories listed below.
- [CVPR 2025] Mitigating Object Hallucinations in Large Vision-Language Models with Assembly of Global and Local Attention ☆60 · Updated last year
- [CVPR 2024] HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data ☆51 · Updated last year
- Official PyTorch implementation of "RITUAL: Random Image Transformations as a Universal Anti-hallucination Lever in Large Vision Language… ☆14 · Updated last year
- Exploring prompt tuning with pseudolabels for multiple modalities, learning settings, and training strategies. ☆51 · Updated last year
- PyTorch code for "Contrastive Region Guidance: Improving Grounding in Vision-Language Models without Training" ☆39 · Updated last year
- [ECCV 2024] API: Attention Prompting on Image for Large Vision-Language Models ☆109 · Updated last year
- [ACM Multimedia 2025] Official repo for Debiasing Large Visual Language Models, including a post-hoc debiasing method and Visual… ☆82 · Updated 10 months ago
- ☆18 · Updated last year
- [NeurIPS 2023] Align Your Prompts: Test-Time Prompting with Distribution Alignment for Zero-Shot Generalization ☆110 · Updated last year
- ☆27 · Updated last year
- [ECCV 2024, NeurIPS 2024] Benchmarking Generalized Out-of-Distribution Detection with Vision-Language Models ☆28 · Updated last year
- [CVPR 2025] Hyperbolic Safety-Aware Vision-Language Models ☆26 · Updated 9 months ago
- [ECCV 2024] Mind the Interference: Retaining Pre-trained Knowledge in Parameter Efficient Continual Learning of Vision-Language Models ☆56 · Updated last year
- Look, Compare, Decide: Alleviating Hallucination in Large Vision-Language Models via Multi-View Multi-Path Reasoning ☆24 · Updated last year
- [ICLR 2025] VL-ICL Bench: The Devil in the Details of Multimodal In-Context Learning ☆69 · Updated 3 months ago
- [ICLR 2024] Test-Time RL with CLIP Feedback for Vision-Language Models ☆97 · Updated 2 months ago
- ☆11 · Updated last year
- [TCSVT 2023] Official code for "Understanding and Mitigating Overfitting in Prompt Tuning for Vision-Language Models" ☆29 · Updated 2 years ago
- An Enhanced CLIP Framework for Learning with Synthetic Captions ☆39 · Updated 8 months ago
- [ICCV 2023] Distribution-Aware Prompt Tuning for Vision-Language Models ☆44 · Updated 2 years ago
- [CVPR 2023] Task Residual for Tuning Vision-Language Models ☆75 · Updated 2 years ago
- Official repo for the paper "[CLS] Token Tells Everything Needed for Training-free Efficient MLLMs" ☆23 · Updated 8 months ago
- [ICCV 2023] Official code for "Improving Zero-Shot Generalization for CLIP with Synthesized Prompts" ☆103 · Updated last year
- ☆13 · Updated 3 years ago
- Official repository of Personalized Visual Instruct Tuning ☆33 · Updated 10 months ago
- [ICLR 2024] Real-Fake: Effective Training Data Synthesis Through Distribution Matching ☆78 · Updated 2 years ago
- [ICCV 2023] Official implementation of "Read-only Prompt Optimization for Vision-Language Few-shot Learning" ☆55 · Updated 2 years ago
- CLIP-MoE: Mixture of Experts for CLIP ☆51 · Updated last year
- [CVPR 2024] Contrasting Intra-Modal and Ranking Cross-Modal Hard Negatives to Enhance Visio-Linguistic Fine-grained Understanding ☆53 · Updated 9 months ago
- This repository houses the code for the paper "The Neglected of VLMs" ☆30 · Updated 2 weeks ago