deep-spin / mcan-vqa-continuous-attention
☆23 · Updated 4 years ago
Alternatives and similar repositories for mcan-vqa-continuous-attention:
Users interested in mcan-vqa-continuous-attention are comparing it to the libraries listed below.
- PyTorch code for "Perceiver-VL: Efficient Vision-and-Language Modeling with Iterative Latent Attention" (WACV 2023) ☆33 · Updated 2 years ago
- A PyTorch implementation of the Attention on Attention module (both self and guided variants) for Visual Question Answering ☆42 · Updated 4 years ago
- PyTorch version of DeCEMBERT: Learning from Noisy Instructional Videos via Dense Captions and Entropy Minimization (NAACL 2021) ☆17 · Updated 2 years ago
- ☆73 · Updated 2 years ago
- Code for the EMNLP 2021 oral paper "Are Gender-Neutral Queries Really Gender-Neutral? Mitigating Gender Bias in Image Search" https://arx… ☆11 · Updated 2 years ago
- MLPs for Vision and Language Modeling (Coming Soon) ☆27 · Updated 3 years ago
- PyTorch version of VidLanKD: Improving Language Understanding via Video-Distilled Knowledge Transfer (NeurIPS 2021) ☆56 · Updated 2 years ago
- ☆22 · Updated 4 years ago
- Official codebase for ICLR oral paper Unsupervised Vision-Language Grammar Induction with Shared Structure Modeling ☆36 · Updated 3 years ago
- Latent Normalizing Flows for Many-to-Many Cross Domain Mappings (ICLR 2020) ☆33 · Updated 2 years ago
- [EMNLP 2021] Code and data for our paper "Vision-and-Language or Vision-for-Language? On Cross-Modal Influence in Multimodal Transformers… ☆20 · Updated 3 years ago
- codebase for the SIMAT dataset and evaluation ☆39 · Updated 3 years ago
- Code for the paper PermuteFormer ☆42 · Updated 3 years ago
- [EMNLP 2020] What is More Likely to Happen Next? Video-and-Language Future Event Prediction ☆48 · Updated 2 years ago
- [ACL-IJCNLP 2021] "EarlyBERT: Efficient BERT Training via Early-bird Lottery Tickets" by Xiaohan Chen, Yu Cheng, Shuohang Wang, Zhe Gan, … ☆18 · Updated 3 years ago
- ☆53 · Updated 3 years ago
- We present a framework for training multi-modal deep learning models on unlabelled video data by forcing the network to learn invariances… ☆47 · Updated 3 years ago
- Research code for "Training Vision-Language Transformers from Captions Alone" ☆34 · Updated 2 years ago
- Code and data for the project "Visually grounded continual learning of compositional semantics" ☆21 · Updated 2 years ago
- ☆11 · Updated 4 years ago
- Research Code for NeurIPS 2020 Spotlight paper "Large-Scale Adversarial Training for Vision-and-Language Representation Learning": LXMERT… ☆21 · Updated 4 years ago
- Project page for "Visual Grounding in Video for Unsupervised Word Translation" CVPR 2020 ☆42 · Updated 4 years ago
- Audio Visual Scene-Aware Dialog (AVSD) Challenge at the 10th Dialog System Technology Challenge (DSTC) ☆27 · Updated 2 years ago
- ☆36 · Updated 4 years ago
- [ACL 2023] Code and data for our paper "Measuring Progress in Fine-grained Vision-and-Language Understanding" ☆13 · Updated last year
- Command-line tool for downloading and extending the RedCaps dataset. ☆46 · Updated last year
- [ACL 2021] mTVR: Multilingual Video Moment Retrieval ☆26 · Updated 2 years ago
- [ICML 2022] Code and data for our paper "IGLUE: A Benchmark for Transfer Learning across Modalities, Tasks, and Languages" ☆49 · Updated 2 years ago
- Implementation of Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in Pytorch ☆118 · Updated 3 years ago
- Code for ACL 2023 Oral Paper: ManagerTower: Aggregating the Insights of Uni-Modal Experts for Vision-Language Representation Learning ☆11 · Updated 4 months ago