changdaeoh / multimodal-mixup
Official implementation for NeurIPS'23 paper "Geodesic Multi-Modal Mixup for Robust Fine-Tuning"
☆36 · Updated last year
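For context: as the title suggests, the method mixes image and text embeddings along geodesics of the unit hypersphere (i.e., spherical interpolation) rather than linearly, and the paper uses such mixtures for robust contrastive fine-tuning. Below is a minimal sketch of that interpolation in PyTorch; the `geodesic_mixup` name and its exact role are illustrative assumptions, not the repo's actual API.

```python
import torch
import torch.nn.functional as F

def geodesic_mixup(u: torch.Tensor, v: torch.Tensor, lam: float) -> torch.Tensor:
    """Spherical interpolation (slerp) between batches of embeddings.

    Hypothetical helper, not the repo's API. Mixing along the geodesic of
    the unit hypersphere keeps the result unit-norm, unlike linear mixup.
    u, v: (batch, dim) image/text embeddings; lam: mixing ratio in [0, 1].
    """
    u = F.normalize(u, dim=-1)
    v = F.normalize(v, dim=-1)
    # Angle between each pair; clamp so acos/sin stay numerically safe.
    cos = (u * v).sum(dim=-1, keepdim=True).clamp(-1 + 1e-7, 1 - 1e-7)
    theta = torch.acos(cos)
    sin_theta = torch.sin(theta)
    w_u = torch.sin((1.0 - lam) * theta) / sin_theta
    w_v = torch.sin(lam * theta) / sin_theta
    return w_u * u + w_v * v  # stays on the unit sphere by construction
```

Mixed embeddings of this kind can then serve, for example, as hard negatives in a CLIP-style contrastive loss, which is roughly the robust fine-tuning angle the paper pursues.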
Alternatives and similar repositories for multimodal-mixup
Users interested in multimodal-mixup are comparing it to the repositories listed below
- Official Code Release for "Diagnosing and Rectifying Vision Models using Language" (ICLR 2023) ☆34 · Updated 2 years ago
- ☆29 · Updated 3 years ago
- Official repository for the ICCV 2023 paper: "Waffling around for Performance: Visual Classification with Random Words and Broad Concepts" ☆61 · Updated 2 years ago
- [DMLR 2024] Benchmarking Robustness of Multimodal Image-Text Models under Distribution Shift ☆38 · Updated last year
- Implementation for the paper "Reliable Visual Question Answering: Abstain Rather Than Answer Incorrectly" (ECCV 2022, https://arxiv.org/abs/…) ☆38 · Updated 2 years ago
- Mind the Gap: Understanding the Modality Gap in Multi-modal Contrastive Representation Learning ☆168 · Updated 3 years ago
- Learning to compose soft prompts for compositional zero-shot learning ☆93 · Updated 4 months ago
- Repository for the paper "Dense and Aligned Captions (DAC) Promote Compositional Reasoning in VL Models" ☆27 · Updated 2 years ago
- Code for "Debiasing Vision-Language Models via Biased Prompts" ☆60 · Updated 2 years ago
- [CVPR 2024] Contrasting Intra-Modal and Ranking Cross-Modal Hard Negatives to Enhance Visio-Linguistic Fine-grained Understanding ☆53 · Updated 9 months ago
- ☆23 · Updated last year
- Official PyTorch implementation of "Improved Probabilistic Image-Text Representations" (ICLR 2024) ☆58 · Updated last year
- Code for "Multitask Vision-Language Prompt Tuning" (https://arxiv.org/abs/2211.11720) ☆56 · Updated last year
- Code and results accompanying our paper "CHiLS: Zero-Shot Image Classification with Hierarchical Label Sets" ☆59 · Updated 2 years ago
- Official implementation for CVPR'23 paper "BlackVIP: Black-Box Visual Prompting for Robust Transfer Learning" ☆109 · Updated 2 years ago
- Official implementation of our EMNLP 2022 paper "CPL: Counterfactual Prompt Learning for Vision and Language Models"