Sy-Zhang / MMC-PCFG
Video-aided Unsupervised Grammar Induction, NAACL'21 [Best Long Paper]
☆40 · Updated 2 years ago
Alternatives and similar repositories for MMC-PCFG:
Users interested in MMC-PCFG are comparing it to the repositories listed below.
- Code for the WACV 2021 paper "Meta Module Network for Compositional Visual Reasoning" ☆43 · Updated 3 years ago
- Dataset and source code for the EMNLP 2019 paper "What You See is What You Get: Visual Pronoun Coreference Resolution in Dialogues" ☆25 · Updated 3 years ago
- PyTorch version of VidLanKD: Improving Language Understanding via Video-Distilled Knowledge Transfer (NeurIPS 2021) ☆56 · Updated 2 years ago
- Visually Grounded PCFG Induction ☆39 · Updated 2 years ago
- Code and data for "Broaden the Vision: Geo-Diverse Visual Commonsense Reasoning" (EMNLP 2021) ☆28 · Updated 3 years ago
- This repository contains code used in our ACL'20 paper "History for Visual Dialog: Do We Really Need It?" ☆34 · Updated last year
- Multitask Multilingual Multimodal Pre-training ☆71 · Updated 2 years ago
- [EMNLP 2020] What is More Likely to Happen Next? Video-and-Language Future Event Prediction ☆48 · Updated 2 years ago
- Official codebase for the ICLR 2022 oral paper "Unsupervised Vision-Language Grammar Induction with Shared Structure Modeling" ☆35 · Updated 2 years ago
- VaLM: Visually-augmented Language Modeling (ICLR 2023)