iacercalixto / variational_mmt
Codebase for the paper "Latent Variable Model for Multi-modal Translation".
☆16 · Updated 11 months ago
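For orientation, below is a minimal, hypothetical PyTorch sketch of the general idea behind latent-variable multimodal translation: a Gaussian posterior over a latent code is inferred from source-text and image features, and a translation decoder would condition on a sample of that code. None of the module names, dimensions, or training details come from this repository; they are illustrative assumptions only.

```python
import torch
import torch.nn as nn

class LatentMultimodalEncoder(nn.Module):
    """Sketch: infer a Gaussian latent code z from pooled text and image features."""

    def __init__(self, txt_dim=512, img_dim=2048, latent_dim=128):
        super().__init__()
        self.fuse = nn.Linear(txt_dim + img_dim, 512)
        self.to_mu = nn.Linear(512, latent_dim)
        self.to_logvar = nn.Linear(512, latent_dim)

    def forward(self, txt_feats, img_feats):
        # txt_feats: (batch, txt_dim) pooled source-sentence encoding
        # img_feats: (batch, img_dim) global image feature (e.g. a pooled CNN vector)
        h = torch.tanh(self.fuse(torch.cat([txt_feats, img_feats], dim=-1)))
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterisation trick: sample z ~ N(mu, sigma^2)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        # KL divergence to a standard-normal prior (one term of the ELBO)
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1)
        return z, kl

# Toy usage with random features; the decoder that consumes z is omitted here.
enc = LatentMultimodalEncoder()
z, kl = enc(torch.randn(4, 512), torch.randn(4, 2048))
print(z.shape, kl.shape)  # torch.Size([4, 128]) torch.Size([4])
```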
Alternatives and similar repositories for variational_mmt
Users interested in variational_mmt are comparing it to the repositories listed below.
- Neural Machine Translation with Universal Visual Representation (ICLR 2020) ☆88 · Updated 5 years ago
- Code for the ACL 2019 paper "Multimodal Transformer Networks for End-to-End Video-Grounded Dialogue Systems" ☆100 · Updated 2 years ago
- PyTorch implementation of Multimodal Neural Machine Translation (MNMT) ☆12 · Updated 4 years ago
- Implementation for "Large-scale Pretraining for Visual Dialog" https://arxiv.org/abs/1912.02379 ☆97 · Updated 5 years ago
- Multi-modal Neural Machine Translation in PyTorch ☆44 · Updated 7 years ago
- ✨ Official PyTorch implementation for the EMNLP 2019 paper "Dual Attention Networks for Visual Reference Resolution in Visual Dialog" ☆45 · Updated 2 years ago
- Code used in the ACL 2020 paper "History for Visual Dialog: Do we really need it?" ☆34 · Updated 2 years ago
- ☆53 · Updated 5 years ago
- ☆53 · Updated 3 years ago
- Dataset and source code for the EMNLP 2019 paper "What You See is What You Get: Visual Pronoun Coreference Resolution in Dialogues" ☆26 · Updated 3 years ago
- ☆27 · Updated 5 years ago
- Code for the CVPR 2019 paper "Recursive Visual Attention in Visual Dialog" ☆64 · Updated 2 years ago
- Code for the ACL 2020 paper "Dense-Caption Matching and Frame-Selection Gating for Temporal Localization in VideoQA", Hyounghun Kim, Zineng T… ☆34 · Updated 5 years ago
- Code for "Dynamic Context-guided Capsule Network for Multimodal Machine Translation" (ACM MM 2020) ☆42 · Updated 3 years ago
- Code for ViLBERTScore (EMNLP Eval4NLP workshop) ☆18 · Updated 2 years ago
- ☆21 · Updated 11 months ago
- An unreferenced image captioning metric (ACL 2021) ☆30 · Updated last year
- [Reproduce] Code for the EMNLP 2018 paper "A Visual Attention Grounding Neural Model for Multimodal Machine Translation" ☆11 · Updated 5 years ago
- Code repository for the EMNLP 2021 paper "Vision Guided Generative Pre-trained Language Models for Multimodal Abstractive Summarization" ☆55 · Updated 3 years ago
- Code, models, and datasets for the OpenViDial dataset ☆131 · Updated 3 years ago
- ☆30 · Updated 4 years ago
- [EMNLP 2018] Training for Diversity in Image Paragraph Captioning ☆89 · Updated 5 years ago
- [TACL 2021] Code and data for the framework in "Multimodal Pretraining Unmasked: A Meta-Analysis and a Unified Framework of Vision-and-La… ☆114 · Updated 3 years ago
- Information Maximizing Visual Question Generation ☆66 · Updated last year
- BottomUpTopDown VQA model with question-type debiasing ☆22 · Updated 5 years ago
- ☆24 · Updated 4 years ago
- Dataset and starter code for the visual entailment dataset ☆110 · Updated 3 years ago
- Dataset for Bilingual VLN ☆11 · Updated 4 years ago
- ☆45 · Updated last month
- Starter code for the VMT task and challenge ☆51 · Updated 4 years ago