microsoft / M3P
Multitask Multilingual Multimodal Pre-training
☆71 · Updated 2 years ago
Alternatives and similar repositories for M3P:
Users interested in M3P are comparing it to the libraries listed below.
- CVPR 2021 Official PyTorch Code for UC2: Universal Cross-lingual Cross-modal Vision-and-Language Pre-training ☆34 · Updated 3 years ago
- Research Code for NeurIPS 2020 Spotlight paper "Large-Scale Adversarial Training for Vision-and-Language Representation Learning": UNITER… ☆118 · Updated 4 years ago
- Code, Models and Datasets for OpenViDial Dataset ☆131 · Updated 3 years ago
- Neural Machine Translation with Universal Visual Representation (ICLR 2020) ☆88 · Updated 4 years ago
- Source code and pre-trained/fine-tuned checkpoint for NAACL 2021 paper LightningDOT ☆73 · Updated 2 years ago
- [TACL 2021] Code and data for the framework in "Multimodal Pretraining Unmasked: A Meta-Analysis and a Unified Framework of Vision-and-La… ☆114 · Updated 2 years ago
- Cross-View Language Modeling: Towards Unified Cross-Lingual Cross-Modal Pre-training (ACL 2023) ☆90 · Updated last year
- ☆44 · Updated 2 years ago
- Starter Code for the VALUE benchmark ☆80 · Updated 2 years ago
- [EMNLP 2020] What is More Likely to Happen Next? Video-and-Language Future Event Prediction ☆48 · Updated 2 years ago
- PyTorch version of VidLanKD: Improving Language Understanding via Video-Distilled Knowledge Transfer (NeurIPS 2021) ☆56 · Updated 2 years ago
- ☆53 · Updated 3 years ago
- Code and data for "Broaden the Vision: Geo-Diverse Visual Commonsense Reasoning" (EMNLP 2021). ☆28 · Updated 3 years ago
- This repo contains code and instructions for baselines in the VLUE benchmark. ☆41 · Updated 2 years ago
- DSTC10 Track 1 - MOD: Internet Meme Incorporated Open-domain Dialog ☆50 · Updated 2 years ago
- Implementation for "Large-scale Pretraining for Visual Dialog" https://arxiv.org/abs/1912.02379 ☆96 · Updated 4 years ago
- Research Code for NeurIPS 2020 Spotlight paper "Large-Scale Adversarial Training for Vision-and-Language Representation Learning": LXMERT… ☆21 · Updated 4 years ago
- [ICML 2022] Code and data for our paper "IGLUE: A Benchmark for Transfer Learning across Modalities, Tasks, and Languages" ☆49 · Updated 2 years ago
- ☆101 · Updated 2 years ago
- ROSITA: Enhancing Vision-and-Language Semantic Alignments via Cross- and Intra-modal Knowledge Integration ☆56 · Updated last year
- Pre-trained V+L Data Preparation ☆46 · Updated 4 years ago
- Simple is not Easy: A Simple Strong Baseline for TextVQA and TextCaps [AAAI 2021] ☆57 · Updated 2 years ago
- VaLM: Visually-augmented Language Modeling (ICLR 2023) ☆56 · Updated 2 years ago
- Dataset and code for the paper "Product-oriented Machine Translation with Cross-modal Cross-lingual Pre-training". ☆25 · Updated 3 years ago
- Dataset and starting code for the visual entailment dataset ☆109 · Updated 2 years ago
- Video-aided Unsupervised Grammar Induction, NAACL'21 [best long paper] ☆40 · Updated 2 years ago
- Counterfactual Samples Synthesizing for Robust VQA ☆77 · Updated 2 years ago
- [CVPR 2021] Counterfactual VQA: A Cause-Effect Look at Language Bias ☆120 · Updated 3 years ago
- This repository contains code used in our ACL'20 paper "History for Visual Dialog: Do we really need it?" ☆34 · Updated last year
- CVPR 2022 (Oral) PyTorch Code for Unsupervised Vision-and-Language Pre-training via Retrieval-based Multi-Granular Alignment ☆22 · Updated 2 years ago