BierOne / Attention-Faithfulness
[ICML 2022] The PyTorch implementation of "Rethinking Attention-Model Explainability through Faithfulness Violation Test" (https://arxiv.org/abs/2201.12114).
☆19 · Updated 2 years ago
Alternatives and similar repositories for Attention-Faithfulness:
Users that are interested in Attention-Faithfulness are comparing it to the libraries listed below
- Implementation for the paper "Reliable Visual Question Answering: Abstain Rather Than Answer Incorrectly" (ECCV 2022: https://arxiv.org/abs…) ☆33 · Updated last year
- Repo for ICCV 2021 paper: Beyond Question-Based Biases: Assessing Multimodal Shortcut Learning in Visual Question Answering ☆26 · Updated 9 months ago
- Official implementation of our EMNLP 2022 paper "CPL: Counterfactual Prompt Learning for Vision and Language Models" ☆33 · Updated 2 years ago
- ☆25 · Updated 2 years ago
- Official code release for "Diagnosing and Rectifying Vision Models using Language" (ICLR 2023) ☆33 · Updated last year
- ☆19 · Updated last year
- Codes and scripts for "Explainable Semantic Space by Grounding Language to Vision with Cross-Modal Contrastive Learning" ☆21 · Updated 3 years ago
- ☆29 · Updated 2 years ago
- The official implementation of the paper "Generalizing to Evolving Domains with Latent Structure-Aware Sequential Autoencoder" ☆24 · Updated last year
- Code for "Why is Winoground Hard? Investigating Failures in Visuolinguistic Compositionality", EMNLP 2022 ☆30 · Updated last year
- The official implementation of Unleashing the Power of Contrastive Self-Supervised Visual Models via Contrast-Regulari… ☆21 · Updated 2 years ago
- [ICML 2022] Code and data for our paper "IGLUE: A Benchmark for Transfer Learning across Modalities, Tasks, and Languages" ☆49 · Updated 2 years ago
- Official implementation for the NeurIPS 2023 paper "Geodesic Multi-Modal Mixup for Robust Fine-Tuning" ☆33 · Updated 6 months ago
- Code for Debiasing Vision-Language Models via Biased Prompts ☆57 · Updated last year
- ☆40 · Updated 2 years ago
- ☆22 · Updated 10 months ago
- ☆29 · Updated last year
- Code for the ACL 2023 oral paper "ManagerTower: Aggregating the Insights of Uni-Modal Experts for Vision-Language Representation Learning" ☆11 · Updated 3 months ago
- [NeurIPS 2023] Bootstrapping Vision-Language Learning with Decoupled Language Pre-training ☆24 · Updated last year
- On the Effectiveness of Parameter-Efficient Fine-Tuning ☆38 · Updated last year
- CVPR 2022 (oral) PyTorch code for Unsupervised Vision-and-Language Pre-training via Retrieval-based Multi-Granular Alignment ☆22 · Updated 3 years ago
- ☆14 · Updated 3 years ago
- Code for the ICLR 2022 paper "Attention-based Interpretability with Concept Transformers" ☆40 · Updated 2 years ago
- The SVO-Probes dataset for verb understanding ☆31 · Updated 3 years ago
- [DMLR 2024] Benchmarking Robustness of Multimodal Image-Text Models under Distribution Shift ☆36 · Updated last year
- Source code for the paper "Contrastive Out-of-Distribution Detection for Pretrained Transformers", EMNLP 2021 ☆40 · Updated 3 years ago
- [ICML 2021] "Self-Damaging Contrastive Learning", Ziyu Jiang, Tianlong Chen, Bobak Mortazavi, Zhangyang Wang ☆63 · Updated 3 years ago
- Compress conventional vision-language pre-training data ☆49 · Updated last year
- [ACL 2023] Delving into the Openness of CLIP ☆23 · Updated 2 years ago
- Code implementation for the paper "On the Efficacy of Small Self-Supervised Contrastive Models without Distillation Signals" ☆16 · Updated 3 years ago