easonnie / mlp-vil
MLPs for Vision and Language Modeling (Coming Soon)
☆27 · Updated 2 years ago
Related projects:
- CVPR 2022 (Oral) PyTorch code for Unsupervised Vision-and-Language Pre-training via Retrieval-based Multi-Granular Alignment ☆22 · Updated 2 years ago
- PyTorch version of DeCEMBERT: Learning from Noisy Instructional Videos via Dense Captions and Entropy Minimization (NAACL 2021) ☆17 · Updated last year
- Code and data for the project "Visually grounded continual learning of compositional semantics" ☆21 · Updated last year
- [EMNLP 2021] Code and data for our paper "Vision-and-Language or Vision-for-Language? On Cross-Modal Influence in Multimodal Transformers…" ☆20 · Updated 2 years ago
- PyTorch version of VidLanKD: Improving Language Understanding via Video-Distilled Knowledge Transfer (NeurIPS 2021) ☆56 · Updated last year
- Code for "Counterfactual Variable Control for Robust and Interpretable Question Answering" ☆14 · Updated 3 years ago
- Code for the WACV 2021 paper "Meta Module Network for Compositional Visual Reasoning" ☆43 · Updated 3 years ago
- Repo for the ICCV 2021 paper "Beyond Question-Based Biases: Assessing Multimodal Shortcut Learning in Visual Question Answering" ☆24 · Updated 2 months ago
- PyTorch implementation for our NeurIPS 2019 paper "TAB-VCR: Tags and Attributes based VCR Baselines" https://arxiv.org/abs/1910.14671 ☆19 · Updated 3 years ago
- PyTorch code for "Perceiver-VL: Efficient Vision-and-Language Modeling with Iterative Latent Attention" (WACV 2023) ☆32 · Updated last year
- Code for the model "Heterogeneous Graph Learning for Visual Commonsense Reasoning" (NeurIPS 2019) ☆46 · Updated 4 years ago
- CVPR 2021 official PyTorch code for UC2: Universal Cross-lingual Cross-modal Vision-and-Language Pre-training ☆33 · Updated 2 years ago
- Code for the paper "Point and Ask: Incorporating Pointing into Visual Question Answering" ☆18 · Updated last year
- [ICML 2022] Code and data for our paper "IGLUE: A Benchmark for Transfer Learning across Modalities, Tasks, and Languages" ☆49 · Updated last year
- Code for "Why is Winoground Hard? Investigating Failures in Visuolinguistic Compositionality" (EMNLP 2022) ☆29 · Updated last year
- Official codebase for the ICLR oral paper "Unsupervised Vision-Language Grammar Induction with Shared Structure Modeling" ☆35 · Updated 2 years ago
- Data for the ACL 2019 paper "Expressing Visual Relationships via Language" ☆62 · Updated 3 years ago
- [EMNLP 2020] What is More Likely to Happen Next? Video-and-Language Future Event Prediction ☆47 · Updated 2 years ago
- Research code for "Training Vision-Language Transformers from Captions Alone" ☆34 · Updated 2 years ago
- Official code for the paper "Self-Distillation for Few-Shot Image Captioning" ☆13 · Updated 3 years ago
- Data release for the VALUE benchmark ☆32 · Updated 2 years ago
- Multi-sense word embeddings from visual co-occurrences ☆25 · Updated 5 years ago
- The SVO-Probes dataset for verb understanding ☆29 · Updated 2 years ago
- ROSITA: Enhancing Vision-and-Language Semantic Alignments via Cross- and Intra-modal Knowledge Integration ☆56 · Updated last year
- Shows visual grounding methods can be right for the wrong reasons! (ACL 2020) ☆23 · Updated 4 years ago
- Source code for the paper "Prefix Language Models are Unified Modal Learners" ☆42 · Updated last year