ExplainableML / KG-SP
PyTorch code of our KG-SP method for Compositional Zero-Shot Learning
☆12 Updated 2 years ago
Alternatives and similar repositories for KG-SP
Users interested in KG-SP are comparing it to the libraries listed below
- Paper list of compositional zero-shot learning ☆10 Updated 3 years ago
- Official code for "Disentangling Visual Embeddings for Attributes and Objects", published at CVPR 2022 ☆35 Updated 2 years ago
- [NeurIPS 2023] Official PyTorch code for LOVM: Language-Only Vision Model Selection ☆21 Updated last year
- [ECCV 2022] Grounding Visual Representations with Texts for Domain Generalization ☆31 Updated 2 years ago
- [ICLR 23] Contrastive Alignment of Vision to Language Through Parameter-Efficient Transfer Learning ☆40 Updated 2 years ago
- LaFTer: Label-Free Tuning of Zero-shot Classifier using Language and Unlabeled Image Collections (NeurIPS 2023) ☆29 Updated last year
- ☆62 Updated 2 years ago
- Official Implementation of LADS (Latent Augmentation using Domain descriptionS) ☆52 Updated 2 years ago
- Code for Label Propagation for Zero-shot Classification with Vision-Language Models (CVPR 2024) ☆41 Updated last year
- Multi-label Image Recognition with Partial Labels (IJCV'24, ESWA'24, AAAI'22) ☆40 Updated last year
- Code and results accompanying our paper titled CHiLS: Zero-Shot Image Classification with Hierarchical Label Sets ☆57 Updated 2 years ago
- ☆27 Updated last year
- [CVPR 2023] Learning Attention as Disentangler for Compositional Zero-shot Learning ☆39 Updated 2 years ago
- [AAAI 2022 Oral] This is a PyTorch implementation of the AAAI 2022 paper "Cross-Domain Empirical Risk Minimization for Unbiased Long-tail… ☆33 Updated 3 years ago
- ☆59 Updated 4 months ago
- Learning to compose soft prompts for compositional zero-shot learning. ☆90 Updated last week
- Official code for "Understanding and Mitigating Overfitting in Prompt Tuning for Vision-Language Models" (TCSVT'2023) ☆28 Updated last year
- This repo is the official implementation of UPL (Unsupervised Prompt Learning for Vision-Language Models). ☆116 Updated 3 years ago
- [CVPR23 Highlight] CREPE: Can Vision-Language Foundation Models Reason Compositionally? ☆33 Updated 2 years ago
- ☆42 Updated 8 months ago
- Hypergraph-Induced Semantic Tuplet Loss for Deep Metric Learning [CVPR'22] ☆24 Updated 3 years ago
- Distribution-Aware Prompt Tuning for Vision-Language Models (ICCV 2023) ☆43 Updated last year
- [CVPR 2024] Contrasting Intra-Modal and Ranking Cross-Modal Hard Negatives to Enhance Visio-Linguistic Fine-grained Understanding ☆51 Updated 5 months ago
- Towards a Unified View on Visual Parameter-Efficient Transfer Learning ☆26 Updated 2 years ago
- Learning Bottleneck Concepts in Image Classification (CVPR 2023) ☆40 Updated last year
- COLA: Evaluate how well your vision-language model can Compose Objects Localized with Attributes! ☆24 Updated 9 months ago
- Official code for the paper "TaCA: Upgrading Your Visual Foundation Model with Task-agnostic Compatible Adapter" ☆16 Updated 2 years ago
- SVL-Adapter: Self-Supervised Adapter for Vision-Language Pretrained Models ☆20 Updated last year
- Benchmark data for "Rethinking Benchmarks for Cross-modal Image-text Retrieval" (SIGIR 2023) ☆25 Updated 2 years ago
- Official Code for ICML 2023 Paper: On the Generalization of Multi-modal Contrastive Learning ☆26 Updated last year