yizhen-zhang / VG-Bert
Code and scripts for "Explainable Semantic Space by Grounding Language to Vision with Cross-Modal Contrastive Learning"
☆20 · Updated 3 years ago
Alternatives and similar repositories for VG-Bert
Users interested in VG-Bert are comparing it to the libraries listed below.
- [ICML 2022] Code and data for our paper "IGLUE: A Benchmark for Transfer Learning across Modalities, Tasks, and Languages" ☆49 · Updated 2 years ago
- VaLM: Visually-augmented Language Modeling. ICLR 2023. ☆56 · Updated 2 years ago
- Repo for ICCV 2021 paper: Beyond Question-Based Biases: Assessing Multimodal Shortcut Learning in Visual Question Answering ☆27 · Updated last year
- Code and Experiments for ACL-IJCNLP 2021 Paper "Mind Your Outliers! Investigating the Negative Impact of Outliers on Active Learning for … ☆57 · Updated 3 years ago
- Code for paper "Can contrastive learning avoid shortcut solutions?" NeurIPS 2021. ☆47 · Updated 3 years ago
- [ICML 2022] This is the PyTorch implementation of "Rethinking Attention-Model Explainability through Faithfulness Violation Test" (https:… ☆19 · Updated 3 years ago
- NeurIPS 2019 Paper: RUBi: Reducing Unimodal Biases for Visual Question Answering ☆64 · Updated 4 years ago
- ☆158 · Updated 4 years ago
- Official codebase for ICLR oral paper Unsupervised Vision-Language Grammar Induction with Shared Structure Modeling ☆36 · Updated 3 years ago
- The official repository for our paper "Are Neural Nets Modular? Inspecting Functional Modularity Through Differentiable Weight Masks". We… ☆46 · Updated 2 years ago
- Code, data, models for the Sherlock corpus ☆58 · Updated 2 years ago
- Code for 'Why is Winoground Hard? Investigating Failures in Visuolinguistic Compositionality', EMNLP 2022 ☆31 · Updated 2 years ago
- [DMLR 2024] Benchmarking Robustness of Multimodal Image-Text Models under Distribution Shift ☆38 · Updated last year
- The SVO-Probes Dataset for Verb Understanding ☆31 · Updated 3 years ago
- MLPs for Vision and Language Modeling (Coming Soon) ☆27 · Updated 3 years ago
- [ICML 2021] “Self-Damaging Contrastive Learning”, Ziyu Jiang, Tianlong Chen, Bobak Mortazavi, Zhangyang Wang ☆63 · Updated 3 years ago
- ☆48 · Updated 3 years ago
- [EMNLP 2021] Code and data for our paper "Vision-and-Language or Vision-for-Language? On Cross-Modal Influence in Multimodal Transformers… ☆20 · Updated 3 years ago
- ☆30 · Updated 2 years ago
- [ICLR'22] Self-supervised learning of optimally robust representations for domain shift. ☆24 · Updated 3 years ago
- MoCo with Alignment and Uniformity Loss. ☆62 · Updated 3 years ago
- Source code for the paper "Prefix Language Models are Unified Modal Learners" ☆43 · Updated 2 years ago
- Code for WACV 2021 Paper "Meta Module Network for Compositional Visual Reasoning" ☆43 · Updated 4 years ago
- ☆81 · Updated last year
- CVPR 2022, Robust Contrastive Learning against Noisy Views ☆84 · Updated 3 years ago
- Code for the ICLR 2022 paper "Attention-based interpretability with Concept Transformers" ☆42 · Updated 3 weeks ago
- ☆40 · Updated 2 years ago
- Use this package to compute the intrinsic dimensionality of your task given a fixed neural network in PyTorch ☆36 · Updated 2 years ago
- TensorFlow implementation of Invariant Rationalization ☆49 · Updated 2 years ago
- Demonstrates failures of bias mitigation methods under varying types/levels of biases (WACV 2021) ☆25 · Updated last year