Pendulibrium / ai-visual-storytelling-seq2seq
Implementation of a seq2seq model for the Visual Storytelling Challenge (VIST): http://visionandlanguage.net/VIST/index.html
☆62 · Updated 7 years ago
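As context for the listing below: visual storytelling seq2seq models encode a sequence of pre-extracted image features into a context representation and decode a story from it. The sketch below is purely illustrative of that encoder/decoder shape; the function names, toy vocabulary, and hand-set vectors are hypothetical, not this repository's API (real systems use trained RNN/attention decoders over CNN features).

```python
def encode(image_features):
    """Toy encoder: mean-pool per-image feature vectors into one context vector."""
    dim = len(image_features[0])
    return [sum(f[i] for f in image_features) / len(image_features)
            for i in range(dim)]

def decode(context, vocab, max_len=3):
    """Toy greedy decoder: at each step emit the word whose (hand-set)
    embedding has the highest dot product with the context vector,
    removing it from the candidate pool so the story does not repeat."""
    story, remaining = [], dict(vocab)
    for _ in range(min(max_len, len(remaining))):
        word = max(remaining,
                   key=lambda w: sum(a * b for a, b in zip(remaining[w], context)))
        story.append(word)
        del remaining[word]
    return story

# Hypothetical 2-D features for a 2-image album and a 3-word vocabulary.
context = encode([[1.0, 0.0], [0.0, 2.0]])          # -> [0.5, 1.0]
story = decode(context, {"sun": [0.0, 1.0],
                         "beach": [1.0, 0.0],
                         "walk": [0.4, 0.4]})
print(story)
```

This only demonstrates the data flow (album features in, word sequence out); the repositories listed below differ mainly in how the encoder, decoder, and training signal (e.g. adversarial or composite rewards) are realized.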
Alternatives and similar repositories for ai-visual-storytelling-seq2seq
Users interested in ai-visual-storytelling-seq2seq are comparing it to the repositories listed below.
- Code for the ACL paper "No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling" ☆137 · Updated 4 years ago
- GLAC Net: GLocal Attention Cascading Network for the Visual Storytelling Challenge ☆45 · Updated 5 years ago
- [EMNLP 2018] PyTorch code for TVQA: Localized, Compositional Video Question Answering ☆179 · Updated 2 years ago
- Code for the AAAI 2020 paper "What Makes A Good Story? Designing Composite Rewards for Visual Storytelling" ☆26 · Updated 4 years ago
- ☆45 · Updated 3 months ago
- [EMNLP 2018] Training for Diversity in Image Paragraph Captioning ☆90 · Updated 6 years ago
- Diagram question answering system described in "A Diagram is Worth a Dozen Images" ☆38 · Updated 8 years ago
- Pre-trained V+L Data Preparation ☆46 · Updated 5 years ago
- Novel Object Captioner: captioning images with diverse objects ☆41 · Updated 7 years ago
- VIST storytelling evaluation tool ☆21 · Updated last year
- Data for the ACL 2019 paper "Expressing Visual Relationships via Language" ☆62 · Updated 4 years ago
- Cross-modal Coherence Modeling for Caption Generation ☆11 · Updated 5 years ago
- Code for the CoNLL 2019 paper "Compositional Generalization in Image Captioning" by Mitja Nikolaus, Mostafa Abdou, Matthew Lamm, Rahul Ar… ☆26 · Updated 5 years ago
- Data and code for the CVPR 2020 paper "VIOLIN: A Large-Scale Dataset for Video-and-Language Inference" ☆162 · Updated 5 years ago
- [CVPR 2020] Transform and Tell: Entity-Aware News Image Captioning ☆92 · Updated last year
- An implementation of the paper "Contextualize, Show and Tell: A Neural Visual Storyteller", presented at the Storytelling Workshop, co-lo… ☆34 · Updated 6 years ago
- Support, annotation, evaluation, and baseline models for the imSitu dataset ☆58 · Updated 5 years ago
- Transformer-based image captioning ☆156 · Updated 6 years ago
- Code and output for the AAAI paper "Knowledge-Enriched Visual Storytelling" ☆40 · Updated 4 years ago
- Good News Everyone! (CVPR 2019) ☆128 · Updated 3 years ago
- ☆29 · Updated 5 years ago
- Code for the CVPR 2019 paper "Recursive Visual Attention in Visual Dialog" ☆64 · Updated 2 years ago
- ☆54 · Updated 5 years ago
- Code for the ACL 2019 paper "Multimodal Transformer Networks for End-to-End Video-Grounded Dialogue Systems" ☆100 · Updated 2 years ago
- PyTorch code for the EMNLP 2020 paper "X-LXMERT: Paint, Caption and Answer Questions with Multi-Modal Transformers" ☆50 · Updated 4 years ago
- Starter code for the VMT task and challenge ☆51 · Updated 5 years ago
- PyTorch code for the EMNLP 2020 paper "Vokenization: Improving Language Understanding with Visual Supervision" ☆190 · Updated 4 years ago
- ☆32 · Updated 6 years ago
- Implementation of "MULE: Multimodal Universal Language Embedding" ☆16 · Updated 5 years ago
- [ACL 2020] PyTorch code for TVQA+: Spatio-Temporal Grounding for Video Question Answering ☆129 · Updated 2 years ago