Jielin-Qiu / MM_Robustness
[DMLR 2024] Benchmarking Robustness of Multimodal Image-Text Models under Distribution Shift
☆35 · Updated last year
Alternatives and similar repositories for MM_Robustness:
Users interested in MM_Robustness are comparing it to the libraries listed below:
- Official Code Release for "Diagnosing and Rectifying Vision Models using Language" (ICLR 2023) ☆33 · Updated last year
- Official implementation of our EMNLP 2022 paper "CPL: Counterfactual Prompt Learning for Vision and Language Models" ☆33 · Updated 2 years ago
- ☆26 · Updated last year
- Compress conventional Vision-Language Pre-training data ☆49 · Updated last year
- Holistic Coverage and Faithfulness Evaluation of Large Vision-Language Models (ACL Findings 2024) ☆15 · Updated 9 months ago
- Code for "Debiasing Vision-Language Models via Biased Prompts" ☆56 · Updated last year
- HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data (Accepted by CVPR 2024) ☆43 · Updated 7 months ago
- Code for "Why is Winoground Hard? Investigating Failures in Visuolinguistic Compositionality" (EMNLP 2022) ☆30 · Updated last year
- ☆58 · Updated last year
- Repo for the ICCV 2021 paper "Beyond Question-Based Biases: Assessing Multimodal Shortcut Learning in Visual Question Answering" ☆26 · Updated 7 months ago
- This is the official repo for Debiasing Large Visual Language Models, including a Post-Hoc debias method and Visual Debias Decoding strat… ☆76 · Updated 10 months ago
- ☆64 · Updated 5 years ago
- Repository for the paper "Teaching Structured Vision & Language Concepts to Vision & Language Models" ☆46 · Updated last year
- Official implementation for the NeurIPS 2023 paper "Geodesic Multi-Modal Mixup for Robust Fine-Tuning" ☆31 · Updated 4 months ago
- NegCLIP ☆30 · Updated 2 years ago
- [CVPR23 Highlight] CREPE: Can Vision-Language Foundation Models Reason Compositionally? ☆32 · Updated last year
- Repo for the paper "Paxion: Patching Action Knowledge in Video-Language Foundation Models" (NeurIPS 2023 Spotlight) ☆37 · Updated last year
- [arXiv] Aligning Modalities in Vision Large Language Models via Preference Fine-tuning ☆79 · Updated 9 months ago
- (ICML 2024) Improve Context Understanding in Multimodal Large Language Models via Multimodal Composition Learning ☆27 · Updated 4 months ago
- Visual question answering prompting recipes for large vision-language models ☆24 · Updated 5 months ago
- Code for "Multitask Vision-Language Prompt Tuning" (https://arxiv.org/abs/2211.11720) ☆55 · Updated 8 months ago
- VQACL: A Novel Visual Question Answering Continual Learning Setting (CVPR 2023) ☆33 · Updated 10 months ago
- Evaluate robustness of adaptation methods on large vision-language models ☆17 · Updated last year
- The Continual Learning in Multimodality Benchmark ☆66 · Updated last year
- Official repository for the ICCV 2023 paper "Waffling around for Performance: Visual Classification with Random Words and Broad Concepts… ☆56 · Updated last year
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models ☆68 · Updated 8 months ago
- [AAAI 2023] Symbolic Replay: Scene Graph as Prompt for Continual Learning on VQA Task (Oral) ☆39 · Updated 10 months ago
- Counterfactual Reasoning VQA Dataset ☆24 · Updated last year
- ☆28 · Updated 3 months ago
- Code and datasets for "What’s “up” with vision-language models? Investigating their struggle with spatial reasoning" ☆40 · Updated 11 months ago