Cyang-Zhao / Grad-Eclip
☆32 · Updated 2 weeks ago
Alternatives and similar repositories for Grad-Eclip:
Users interested in Grad-Eclip are comparing it to the repositories listed below.
- Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models (AAAI 2024) ☆68 · Updated 2 months ago
- [ICLR 2023] PLOT: Prompt Learning with Optimal Transport for Vision-Language Models ☆159 · Updated last year
- Code and results accompanying our paper titled CHiLS: Zero-Shot Image Classification with Hierarchical Label Sets ☆57 · Updated last year
- [CVPR 2025] Mitigating Object Hallucinations in Large Vision-Language Models with Assembly of Global and Local Attention ☆28 · Updated 8 months ago
- PyTorch code for "Contrastive Region Guidance: Improving Grounding in Vision-Language Models without Training" ☆34 · Updated last year
- Exploring prompt tuning with pseudolabels for multiple modalities, learning settings, and training strategies. ☆49 · Updated 5 months ago
- [NeurIPS 2023] Align Your Prompts: Test-Time Prompting with Distribution Alignment for Zero-Shot Generalization ☆106 · Updated last year
- FineCLIP: Self-distilled Region-based CLIP for Better Fine-grained Understanding ☆14 · Updated 4 months ago
- [CVPR 2024] Official Repository for "Efficient Test-Time Adaptation of Vision-Language Models" ☆86 · Updated 9 months ago
- [NeurIPS 2024] Code for Dual Prototype Evolving for Test-Time Generalization of Vision-Language Models ☆36 · Updated last month
- Code for Label Propagation for Zero-shot Classification with Vision-Language Models (CVPR 2024) ☆36 · Updated 8 months ago
- Official code for ICCV 2023 paper, "Improving Zero-Shot Generalization for CLIP with Synthesized Prompts" ☆99 · Updated last year
- [CVPR 2024 Highlight] Official implementation for Transferable Visual Prompting. The paper "Exploring the Transferability of Visual Prompt… ☆38 · Updated 3 months ago
- [CVPR'24] Validation-free few-shot adaptation of CLIP, using a well-initialized Linear Probe (ZSLP) and class-adaptive constraints (CLAP)… ☆69 · Updated 10 months ago
- The official PyTorch implementation of our CVPR 2024 paper "MMA: Multi-Modal Adapter for Vision-Language Models". ☆62 · Updated 2 months ago
- [AAAI'25, CVPRW 2024] Official repository of paper titled "Learning to Prompt with Text Only Supervision for Vision-Language Models". ☆103 · Updated 3 months ago
- Code for the paper Visual Explanations of Image–Text Representations via Multi-Modal Information Bottleneck Attribution ☆47 · Updated last year
- [ECCV 2024] Mind the Interference: Retaining Pre-trained Knowledge in Parameter Efficient Continual Learning of Vision-Language Models ☆46 · Updated 9 months ago
- Official code for ICLR 2024 paper, "A Hard-to-Beat Baseline for Training-free CLIP-based Adaptation" ☆76 · Updated 11 months ago
- [NeurIPS 2023] Official implementation and model release of the paper "What Makes Good Examples for Visual In-Context Learning?" ☆173 · Updated last year
- [ICLR 2025] See What You Are Told: Visual Attention Sink in Large Multimodal Models ☆19 · Updated last month
- [ICLR 2025] VL-ICL Bench: The Devil in the Details of Multimodal In-Context Learning ☆51 · Updated 2 months ago
- Official PyTorch implementation of "RITUAL: Random Image Transformations as a Universal Anti-hallucination Lever in Large Vision Language… ☆10 · Updated 3 months ago
- Task Residual for Tuning Vision-Language Models (CVPR 2023) ☆72 · Updated last year
- ☆20 · Updated 11 months ago
- [CVPR 2024] The code of "UniPT: Universal Parallel Tuning for Transfer Learning with Efficient Parameter and Memory" ☆67 · Updated 6 months ago
- The PyTorch implementation for "DEAL: Disentangle and Localize Concept-level Explanations for VLMs" (ECCV 2024 Strong Double Blind) ☆19 · Updated 5 months ago
- [CVPR 2024] Improving language-visual pretraining efficiency by performing cluster-based masking on images. ☆27 · Updated 10 months ago
- ☆16 · Updated 6 months ago
- Code Release for "CLIPood: Generalizing CLIP to Out-of-Distributions" (ICML 2023), https://arxiv.org/abs/2302.00864 ☆65 · Updated last year