[AAAI 2023] CALIP: Zero-Shot Enhancement of CLIP with Parameter-free Attention
☆93 · Updated Apr 29, 2023
Alternatives and similar repositories for CALIP
Users interested in CALIP are comparing it to the repositories listed below.
- Code for the paper: "SuS-X: Training-Free Name-Only Transfer of Vision-Language Models" [ICCV'23] ☆106 · Updated Aug 22, 2023
- [ICCV 2023] Code for "Not All Features Matter: Enhancing Few-shot CLIP with Adaptive Prior Refinement" ☆149 · Updated Apr 21, 2024
- (ICCV 2023) Official implementation of 'ViewRefer: Grasp the Multi-view Knowledge for 3D Visual Grounding with GPT and Prototype Guidance'… ☆59 · Updated Apr 18, 2024
- [CVPR 2023] Prompt, Generate, then Cache: Cascade of Foundation Models makes Strong Few-shot Learners ☆381 · Updated Jun 1, 2023
- ☆200 · Updated May 10, 2023
- ☆661 · Updated Nov 28, 2023
- Official code for the ICLR 2024 paper "A Hard-to-Beat Baseline for Training-free CLIP-based Adaptation" ☆85 · Updated Apr 21, 2024
- [CVPR 2024] Dual Memory Networks: A Versatile Adaptation Approach for Vision-Language Models ☆92 · Updated Jul 4, 2024
- An official PyTorch implementation for CLIPPR ☆30 · Updated Jul 22, 2023
- If CLIP Could Talk: Understanding Vision-Language Model Representations Through Their Preferred Concept Descriptions ☆17 · Updated Apr 4, 2024
- [COLING'25] HGCLIP: Exploring Vision-Language Models with Graph Representations for Hierarchical Understanding ☆44 · Updated Nov 30, 2024
- [ICCV 2023] CLIPN for Zero-Shot OOD Detection: Teaching CLIP to Say No ☆142 · Updated Dec 2, 2023
- [ICLR 2023] PLOT: Prompt Learning with Optimal Transport for Vision-Language Models ☆175 · Updated Dec 14, 2023
- Official code for the ICCV 2023 paper "Improving Zero-Shot Generalization for CLIP with Synthesized Prompts" ☆103 · Updated Mar 6, 2024
- PyTorch implementation for InMaP ☆11 · Updated Oct 28, 2023
- Align 3D Point Cloud with Multi-modalities for Large Language Models ☆460 · Updated Dec 9, 2023
- ☆21 · Updated Dec 15, 2025
- [CVPR 2023] Prompt, Generate, then Cache: Cascade of Foundation Models makes Strong Few-shot Learners ☆44 · Updated Jun 14, 2023
- LaFTer: Label-Free Tuning of Zero-shot Classifier using Language and Unlabeled Image Collections (NeurIPS 2023) ☆29 · Updated Dec 27, 2023
- [ICCV 2023] Going Beyond Nouns With Vision & Language Models Using Synthetic Data ☆14 · Updated Sep 30, 2023
- ☆574 · Updated Jul 19, 2022
- [ICCV 2023] Diverse Data Augmentation with Diffusions for Effective Test-time Prompt Tuning & [IJCV 2025] Diffusion-Enhanced Test-time Adap… ☆70 · Updated Jan 15, 2025
- ☆17 · Updated Sep 20, 2021
- ☆105 · Updated Dec 7, 2023
- SVL-Adapter: Self-Supervised Adapter for Vision-Language Pretrained Models ☆21 · Updated Jan 11, 2024
- Code and datasets for "Text encoders are performance bottlenecks in contrastive vision-language models". Coming soon! ☆11 · Updated May 24, 2023
- [NeurIPS 2024] Official PyTorch implementation of "Improving Compositional Reasoning of CLIP via Synthetic Vision-Language Negatives" ☆46 · Updated Dec 1, 2024
- Official repository for the ICCV 2023 paper "Waffling around for Performance: Visual Classification with Random Words and Broad Concepts…" ☆61 · Updated Jul 8, 2023
- An efficient tuning method for VLMs ☆80 · Updated Mar 10, 2024
- [CVPR'24] Validation-free few-shot adaptation of CLIP, using a well-initialized Linear Probe (ZSLP) and class-adaptive constraints (CLAP)… ☆81 · Updated Jun 7, 2025
- Repository for the paper: Teaching Structured Vision & Language Concepts to Vision & Language Models ☆48 · Updated Sep 25, 2023
- Test-time Prompt Tuning (TPT) for zero-shot generalization in vision-language models (NeurIPS 2022) ☆208 · Updated Oct 21, 2022
- (AAAI 2024) Point-PEFT: Parameter-Efficient Fine-Tuning for 3D Pre-trained Models ☆57 · Updated May 19, 2024
- Enhancing Multimodal Compositional Reasoning of Visual Language Models with Generative Negative Mining, WACV 2024 ☆14 · Updated Jan 3, 2024
- ☆61 · Updated May 2, 2025
- [ICCV 2023] ViLLA: Fine-grained vision-language representation learning from real-world data ☆46 · Updated Oct 15, 2023
- ☆29 · Updated Oct 18, 2022
- ☆32 · Updated May 12, 2021
- ☆175 · Updated Dec 29, 2023