BatsResearch / fudd
Follow-Up Differential Descriptions: Language Models Resolve Ambiguities for Image Classification
☆11 · Updated last year
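For orientation, FuDD's core idea is to resolve ambiguous zero-shot predictions by scoring images against natural-language class descriptions rather than bare class names, with a language model generating descriptions that differentiate the classes a classifier confuses. The sketch below only illustrates that general description-scoring pattern with off-the-shelf CLIP; it is not this repository's actual API, and the model name, image path, and hand-written descriptions are placeholder assumptions (in FuDD, the differential descriptions for ambiguous classes would come from an LLM).

```python
# Minimal sketch of description-based zero-shot classification with CLIP.
# NOTE: not the fudd repository's API; descriptions here are hand-written
# placeholders standing in for LLM-generated differential descriptions.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical descriptions contrasting two easily confused classes.
descriptions = {
    "sparrow": ["a photo of a sparrow, a small brown bird with streaked plumage"],
    "finch": ["a photo of a finch, a small bird with a short conical beak"],
}

image = Image.open("bird.jpg")  # placeholder image path

scores = {}
with torch.no_grad():
    for label, texts in descriptions.items():
        inputs = processor(text=texts, images=image,
                           return_tensors="pt", padding=True)
        out = model(**inputs)
        # logits_per_image has shape (1, num_descriptions);
        # average the similarity over all descriptions of this class.
        scores[label] = out.logits_per_image.mean().item()

print(max(scores, key=scores.get))  # predicted class
```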
Alternatives and similar repositories for fudd
Users interested in fudd are comparing it to the libraries listed below.
- If CLIP Could Talk: Understanding Vision-Language Model Representations Through Their Preferred Concept Descriptions ☆16 · Updated last year
- Exploring prompt tuning with pseudolabels for multiple modalities, learning settings, and training strategies. ☆50 · Updated 6 months ago
- Official repository for the ICCV 2023 paper: "Waffling around for Performance: Visual Classification with Random Words and Broad Concepts" ☆57 · Updated last year
- Learning to compose soft prompts for compositional zero-shot learning. ☆88 · Updated last year
- [NeurIPS '24] Frustratingly easy Test-Time Adaptation of VLMs!! ☆44 · Updated last month
- Code and results accompanying our paper titled CHiLS: Zero-Shot Image Classification with Hierarchical Label Sets ☆57 · Updated last year
- Compress conventional Vision-Language Pre-training data ☆51 · Updated last year
- This repository houses the code for the paper "The Neglected Tails in Vision-Language Models" ☆28 · Updated last week
- Code for "Label Propagation for Zero-shot Classification with Vision-Language Models" (CVPR 2024) ☆36 · Updated 9 months ago
- Code for "Multitask Vision-Language Prompt Tuning" (https://arxiv.org/abs/2211.11720) ☆56 · Updated 11 months ago
- [CVPR 2024] Contrasting Intra-Modal and Ranking Cross-Modal Hard Negatives to Enhance Visio-Linguistic Fine-grained Understanding ☆49 · Updated last month
- Official code release for "Diagnosing and Rectifying Vision Models using Language" (ICLR 2023) ☆33 · Updated last year
- [NeurIPS 2023] Align Your Prompts: Test-Time Prompting with Distribution Alignment for Zero-Shot Generalization ☆105 · Updated last year
- Distribution-Aware Prompt Tuning for Vision-Language Models (ICCV 2023) ☆38 · Updated last year
- Create generated datasets and train robust classifiers ☆35 · Updated last year
- Repository for the paper "Teaching Structured Vision & Language Concepts to Vision & Language Models" ☆46 · Updated last year
- Official code for the ICLR 2024 paper "A Hard-to-Beat Baseline for Training-free CLIP-based Adaptation" ☆78 · Updated last year
- LaFTer: Label-Free Tuning of Zero-shot Classifier using Language and Unlabeled Image Collections (NeurIPS 2023) ☆28 · Updated last year
- [ICML 2024] "Envisioning Outlier Exposure by Large Language Models for Out-of-Distribution Detection" ☆12 · Updated 2 months ago
- [NeurIPS 2023] Official PyTorch code for LOVM: Language-Only Vision Model Selection ☆21 · Updated last year
- COLA: Evaluate how well your vision-language model can Compose Objects Localized with Attributes! ☆24 · Updated 5 months ago
- [ACL 2023] Delving into the Openness of CLIP ☆23 · Updated 2 years ago
- An efficient tuning method for VLMs ☆81 · Updated last year
- [CVPR23 Highlight] CREPE: Can Vision-Language Foundation Models Reason Compositionally? ☆32 · Updated 2 years ago
- Code for "Negative Yields Positive: Unified Dual-Path Adapter for Vision-Language Models" ☆26 · Updated 6 months ago