qunovo / NLPrompt
☆50 · Updated 2 months ago
Alternatives and similar repositories for NLPrompt
Users interested in NLPrompt are comparing it to the repositories listed below.
- ☆45 · Updated 7 months ago
- [ICLR'24] Consistency-guided Prompt Learning for Vision-Language Models ☆81 · Updated last year
- Official code for ICCV 2023 paper, "Improving Zero-Shot Generalization for CLIP with Synthesized Prompts" ☆101 · Updated last year
- [CVPR 2024] Official implementation of the paper "DePT: Decoupled Prompt Tuning" ☆107 · Updated 3 months ago
- [ICLR 2025] Official Implementation of Local-Prompt: Extensible Local Prompts for Few-Shot Out-of-Distribution Detection ☆45 · Updated last month
- [CVPR 2024] Dual Memory Networks: A Versatile Adaptation Approach for Vision-Language Models ☆83 · Updated last year
- [ICCV'23 Main Track, WECIA'23 Oral] Official repository of paper titled "Self-regulating Prompts: Foundational Model Adaptation without F… ☆275 · Updated last year
- The official PyTorch implementation of our CVPR 2024 paper "MMA: Multi-Modal Adapter for Vision-Language Models". ☆80 · Updated 4 months ago
- 🔥MDCS: More Diverse Experts with Consistency Self-distillation for Long-tailed Recognition [Official, ICCV 2023] ☆30 · Updated 10 months ago
- [ICCV 2025] Official PyTorch Code for "Advancing Textual Prompt Learning with Anchored Attributes" ☆97 · Updated last week
- ☆26 · Updated last year
- [CVPR 2024] Official implementations of CLIP-KD: An Empirical Study of CLIP Model Distillation ☆125 · Updated 3 weeks ago
- Code and Dataset for the paper "LAMM: Label Alignment for Multi-Modal Prompt Learning" (AAAI 2024) ☆33 · Updated last year
- [ECCV 2024] Mind the Interference: Retaining Pre-trained Knowledge in Parameter Efficient Continual Learning of Vision-Language Models ☆53 · Updated last year
- FineCLIP: Self-distilled Region-based CLIP for Better Fine-grained Understanding (NeurIPS 2024) ☆28 · Updated 9 months ago
- ☆22 · Updated last year
- 🔎Official code for our paper: "VL-Uncertainty: Detecting Hallucination in Large Vision-Language Model via Uncertainty Estimation". ☆43 · Updated 6 months ago
- ☆50 · Updated last year
- [AAAI'25, CVPRW 2024] Official repository of paper titled "Learning to Prompt with Text Only Supervision for Vision-Language Models". ☆113 · Updated 9 months ago
- Code for Label Propagation for Zero-shot Classification with Vision-Language Models (CVPR 2024) ☆41 · Updated last year
- cliptrase ☆46 · Updated last year
- [NeurIPS 2023] LoCoOp: Few-Shot Out-of-Distribution Detection via Prompt Learning ☆99 · Updated 2 months ago
- Exploring prompt tuning with pseudolabels for multiple modalities, learning settings, and training strategies. ☆50 · Updated 10 months ago
- The official code for "TextRefiner: Internal Visual Feature as Efficient Refiner for Vision-Language Models Prompt Tuning" | [AAAI 2025] ☆44 · Updated 6 months ago
- [CVPR 2025] Official PyTorch Code for "DPC: Dual-Prompt Collaboration for Tuning Vision-Language Models" ☆31 · Updated this week
- [ICLR 2024] Test-Time Adaptation with CLIP Reward for Zero-Shot Generalization in Vision-Language Models. ☆90 · Updated last year
- [CVPR 2024] Official Repository for "Efficient Test-Time Adaptation of Vision-Language Models" ☆105 · Updated last year
- [ICML 2024] "Visual-Text Cross Alignment: Refining the Similarity Score in Vision-Language Models" ☆56 · Updated last year
- [AAAI 2024] TagCLIP: A Local-to-Global Framework to Enhance Open-Vocabulary Multi-Label Classification of CLIP Without Training ☆101 · Updated last year
- Learning without Forgetting for Vision-Language Models (TPAMI 2025) ☆44 · Updated 2 months ago