gicheonkang / gst-visdial
Official PyTorch Implementation for CVPR'23 Paper, "The Dialog Must Go On: Improving Visual Dialog via Generative Self-Training"
☆20 Updated 2 years ago
Alternatives and similar repositories for gst-visdial
Users that are interested in gst-visdial are comparing it to the libraries listed below
- Repo for the paper "Paxion: Patching Action Knowledge in Video-Language Foundation Models", NeurIPS 2023 Spotlight ☆37 Updated 2 years ago
- PyTorch Implementation for EMNLP'21 Findings "Reasoning Visual Dialog with Sparse Graph Learning and Knowledge Transfer" ☆13 Updated 2 years ago
- Code for 'Why is Winoground Hard? Investigating Failures in Visuolinguistic Compositionality', EMNLP 2022 ☆31 Updated 2 years ago
- PyTorch code for "Language Models with Image Descriptors are Strong Few-Shot Video-Language Learners" ☆115 Updated 3 years ago
- The SVO-Probes Dataset for Verb Understanding ☆31 Updated 3 years ago
- Official Repository for CVPR 2022 paper "REX: Reasoning-aware and Grounded Explanation" ☆22 Updated 2 years ago
- [EMNLP'22] Weakly-Supervised Temporal Article Grounding ☆14 Updated 2 years ago
- [NeurIPS 2023] A faithful benchmark for vision-language compositionality ☆89 Updated last year
- [EMNLP 2024] IFCap: Image-like Retrieval and Frequency-based Entity Filtering for Zero-shot Captioning ☆15 Updated 7 months ago
- ☆67 Updated 2 years ago
- CVPR 2022 (Oral) PyTorch code for Unsupervised Vision-and-Language Pre-training via Retrieval-based Multi-Granular Alignment ☆22 Updated 3 years ago
- Code for the ICCV'21 paper "Context-aware Scene Graph Generation with Seq2Seq Transformers" ☆43 Updated 4 years ago
- Official repository for the A-OKVQA dataset ☆108 Updated last year
- Official repo for CVPR 2022 (Oral) paper: Revisiting the "Video" in Video-Language Understanding. Contains code for the Atemporal Probe (… ☆50 Updated last year
- Implementation for the paper "Unified Multimodal Model with Unlikelihood Training for Visual Dialog" ☆13 Updated 2 years ago
- [EMNLP 2020] What is More Likely to Happen Next? Video-and-Language Future Event Prediction ☆51 Updated 3 years ago
- Recent Advances in Visual Dialog ☆30 Updated 3 years ago
- ☆17 Updated 2 years ago
- Controllable image captioning model with unsupervised modes ☆21 Updated 2 years ago
- [TACL'23] VSR: A probing benchmark for spatial understanding of vision-language models. ☆138 Updated 2 years ago
- ESPER ☆24 Updated last year
- ☆15 Updated 3 years ago
- An Empirical Study of GPT-3 for Few-Shot Knowledge-Based VQA, AAAI 2022 (Oral) ☆86 Updated 3 years ago
- [ECCV'22 Poster] Explicit Image Caption Editing ☆22 Updated 3 years ago
- Official implementation of our EMNLP 2022 paper "CPL: Counterfactual Prompt Learning for Vision and Language Models" ☆35 Updated 3 years ago
- This repo contains code for Invariant Grounding for Video Question Answering ☆27 Updated 2 years ago
- ICCV 2023 (Oral) Open-domain Visual Entity Recognition Towards Recognizing Millions of Wikipedia Entities ☆43 Updated 7 months ago
- Source code for EMNLP 2022 paper "PEVL: Position-enhanced Pre-training and Prompt Tuning for Vision-language Models" ☆49 Updated 3 years ago
- [ICCV 2023] With a Little Help from your own Past: Prototypical Memory Networks for Image Captioning. ☆19 Updated last year
- [AAAI2023] Symbolic Replay: Scene Graph as Prompt for Continual Learning on VQA Task (Oral) ☆40 Updated last year