Pendulibrium / ai-visual-storytelling-seq2seq
Implementation of a seq2seq model for the Visual Storytelling Challenge (VIST): http://visionandlanguage.net/VIST/index.html
☆62 · Updated 7 years ago
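For context, the sketch below illustrates the general encoder-decoder (seq2seq) pattern that visual-storytelling models like this one build on: encode an album's image features into a context state, then decode story tokens from that state. It is a minimal PyTorch illustration with made-up module names, feature dimensions, and a random-data forward pass; it is not the actual code of this repository or of any repository listed below.

```python
# Illustrative only: a minimal encoder-decoder sketch for visual storytelling.
# Assumes precomputed per-image feature vectors (e.g. from a CNN) for each
# 5-image VIST album; layer sizes and names are hypothetical.
import torch
import torch.nn as nn


class StoryEncoder(nn.Module):
    """Encodes a sequence of image feature vectors into a context state."""
    def __init__(self, feat_dim=2048, hidden_dim=512):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden_dim, batch_first=True)

    def forward(self, image_feats):           # (batch, num_images, feat_dim)
        _, h = self.rnn(image_feats)          # h: (1, batch, hidden_dim)
        return h


class StoryDecoder(nn.Module):
    """Generates story tokens conditioned on the encoder's final state."""
    def __init__(self, vocab_size, hidden_dim=512, embed_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, h0):            # tokens: (batch, seq_len)
        x = self.embed(tokens)
        out, _ = self.rnn(x, h0)
        return self.out(out)                  # (batch, seq_len, vocab_size)


# Toy forward pass with random data, just to show the shapes.
encoder, decoder = StoryEncoder(), StoryDecoder(vocab_size=10000)
feats = torch.randn(2, 5, 2048)               # 2 albums x 5 images x 2048-d features
tokens = torch.randint(0, 10000, (2, 30))     # teacher-forced story tokens
logits = decoder(tokens, encoder(feats))
print(logits.shape)                           # torch.Size([2, 30, 10000])
```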
Alternatives and similar repositories for ai-visual-storytelling-seq2seq
Users interested in ai-visual-storytelling-seq2seq are comparing it to the repositories listed below.
- Code for the ACL paper "No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling" ☆137 · Updated 4 years ago
- Cross-modal Coherence Modeling for Caption Generation ☆11 · Updated 5 years ago
- [EMNLP 2018] PyTorch code for TVQA: Localized, Compositional Video Question Answering ☆179 · Updated 2 years ago
- GLAC Net: GLocal Attention Cascading Network for the Visual Storytelling Challenge ☆45 · Updated 5 years ago
- [CVPR 2020] Transform and Tell: Entity-Aware News Image Captioning ☆92 · Updated last year
- Data and code for CVPR 2020 paper: "VIOLIN: A Large-Scale Dataset for Video-and-Language Inference" ☆162 · Updated 5 years ago
- An implementation of the paper "Contextualize, Show and Tell: A Neural Visual Storyteller", presented at the Storytelling Workshop co-located with NAACL 2018 ☆34 · Updated 6 years ago
- [EMNLP 2018] Training for Diversity in Image Paragraph Captioning ☆90 · Updated 6 years ago
- The code and output of our AAAI paper "Knowledge-Enriched Visual Storytelling" ☆40 · Updated 4 years ago
- ☆44 · Updated 4 months ago
- Code for the AAAI 2020 paper "What Makes A Good Story? Designing Composite Rewards for Visual Storytelling" ☆26 · Updated 4 years ago
- Novel Object Captioner - Captioning Images with diverse objects ☆41 · Updated 7 years ago
- VIST storytelling evaluation tool ☆21 · Updated last year
- ☆32 · Updated 6 years ago
- Implementation for "Large-scale Pretraining for Visual Dialog" https://arxiv.org/abs/1912.02379 ☆97 · Updated 5 years ago
- GitHub repository for "Plot and Rework: Modeling Storylines for Visual Storytelling" (ACL-IJCNLP 2021 Findings) ☆21 · Updated 3 years ago
- ☆29 · Updated 5 years ago
- Show, Edit and Tell: A Framework for Editing Image Captions, CVPR 2020 ☆81 · Updated 5 years ago
- Implementation of "MULE: Multimodal Universal Language Embedding" ☆16 · Updated 5 years ago
- Recognition to Cognition Networks (code for the model in "From Recognition to Cognition: Visual Commonsense Reasoning", CVPR 2019) ☆468 · Updated 4 years ago
- Diagram question answering system described in "A Diagram is Worth a Dozen Images" ☆38 · Updated 8 years ago
- Code for the paper Multimodal Transformer Networks for End-to-End Video-Grounded Dialogue Systems (ACL19) ☆100 · Updated 3 years ago
- BERT + Image Captioning ☆134 · Updated 4 years ago
- PyTorch code for EMNLP 2020 paper "X-LXMERT: Paint, Caption and Answer Questions with Multi-Modal Transformers" ☆50 · Updated 4 years ago
- A PyTorch implementation of "StyleNet: Generating Attractive Visual Captions with Styles" ☆63 · Updated 4 years ago
- Repository to generate CLEVR-Dialog: A diagnostic dataset for Visual Dialog ☆49 · Updated 5 years ago
- [ACL 2020] PyTorch code for TVQA+: Spatio-Temporal Grounding for Video Question Answering ☆129 · Updated 2 years ago
- Pre-trained V+L Data Preparation ☆46 · Updated 5 years ago
- ☆54 · Updated 5 years ago
- Use transformer for captioning ☆156 · Updated 6 years ago