NExT-QA: Next Phase of Question-Answering to Explaining Temporal Actions (CVPR'21)
☆30 · Updated Jul 18, 2023
Alternatives and similar repositories for NExT-OE
Users that are interested in NExT-OE are comparing it to the libraries listed below.
- NExT-QA: Next Phase of Question-Answering to Explaining Temporal Actions (CVPR'21) · ☆188 · Updated Aug 2, 2025
- Video as Conditional Graph Hierarchy for Multi-Granular Question Answering (AAAI'22, Oral) · ☆34 · Updated Sep 17, 2022
- Code for Invariant Grounding for Video Question Answering · ☆27 · Updated Mar 2, 2023
- A VideoQA dataset based on the videos from ActivityNet · ☆91 · Updated Nov 22, 2020
- ☆18 · Updated Nov 1, 2023
- ☆10 · Updated Mar 30, 2022
- ☆19 · Updated Nov 25, 2022
- Beyond RNNs: Positional Self-Attention with Co-Attention for Video Question Answering · ☆27 · Updated Apr 15, 2021
- Implementation for the paper "Hierarchical Conditional Relation Networks for Video Question Answering" (Le et al., CVPR 2020, Oral) · ☆135 · Updated Jul 25, 2024
- 📖The Big-&-Extending-Repository-of-Transformers: Pretrained PyTorch models for Google's BERT, OpenAI GPT & GPT-2, Google/CMU Transformer… · ☆11 · Updated May 30, 2019
- Implementation for the journal paper "DualVGR: A Dual-Visual Graph Reasoning Unit for Video Question Answering" (Jianyu et al., IEEE Tran… · ☆18 · Updated Jun 22, 2021
- [ICCV 2021 Oral + TPAMI] Just Ask: Learning to Answer Questions from Millions of Narrated Videos · ☆127 · Updated Sep 29, 2023
- [ACL 2020] PyTorch code for TVQA+: Spatio-Temporal Grounding for Video Question Answering · ☆133 · Updated Oct 25, 2022
- A large-scale dataset for Document Visual Question Answering · ☆21 · Updated Jul 29, 2024
- ROCK model for Knowledge-Based VQA in Videos · ☆31 · Updated Oct 19, 2020
- Official repo for the CVPR 2022 (Oral) paper: Revisiting the "Video" in Video-Language Understanding. Contains code for the Atemporal Probe (… · ☆51 · Updated May 29, 2024
- Heterogeneous Memory Enhanced Multimodal Attention Model for VideoQA · ☆54 · Updated Sep 13, 2021
- [NeurIPS'24 D&B] Official dataloader and evaluation scripts for LongVideoBench. · ☆115 · Updated Jul 27, 2024
- Repository for our CVPR 2017 and IJCV paper: TGIF-QA · ☆177 · Updated Sep 6, 2021
- Learning Interactions and Relationships between Movie Characters (CVPR'20) · ☆22 · Updated Apr 12, 2023
- ☆11 · Updated Sep 1, 2020
- [EMNLP 2018] PyTorch code for TVQA: Localized, Compositional Video Question Answering · ☆182 · Updated Oct 25, 2022
- [EMNLP 2024] TraveLER: A Modular Multi-LMM Agent Framework for Video Question-Answering · ☆16 · Updated Oct 31, 2024
- Large Language Models are Temporal and Causal Reasoners for Video Question Answering (EMNLP 2023) · ☆78 · Updated Mar 26, 2025
- [CVPR 2026] FluxMem: Adaptive Hierarchical Memory for Streaming Video Understanding · ☆45 · Updated Mar 16, 2026
- Lightweight Multi-Level Multi-Scale Feature Fusion Network for Semantic Segmentation · ☆11 · Updated May 31, 2021
- ☆12 · Updated Apr 7, 2014
- [CVPR 2021 Best Student Paper Honorable Mention, Oral] Official PyTorch code for ClipBERT, an efficient framework for end-to-end learning… · ☆730 · Updated Aug 8, 2023
- PyTorch code for the Findings of EMNLP 2021 paper "Does Vision-and-Language Pretraining Improve Lexical Grounding?" · ☆11 · Updated Sep 26, 2021
- Official PyTorch implementation of "Facing the Elephant in the Room: Visual Prompt Tuning or Full Finetuning?" (ICLR 2024) · ☆13 · Updated Mar 8, 2024
- Video Question Answering via Gradually Refined Attention over Appearance and Motion · ☆174 · Updated Dec 5, 2017
- Can I Trust Your Answer? Visually Grounded Video Question Answering (CVPR'24, Highlight) · ☆84 · Updated Jul 1, 2024
- Code for the ECCV 2022 (Oral) paper "Fine-Grained Scene Graph Generation with Data Transfer" · ☆103 · Updated Jan 24, 2023
- ☆41 · Updated Sep 9, 2025
- ∞-Video: A Training-Free Approach to Long Video Understanding via Continuous-Time Memory Consolidation · ☆19 · Updated Feb 14, 2025
- (CVPR 2024) A benchmark for evaluating Multimodal LLMs using multiple-choice questions · ☆361 · Updated Jan 14, 2025
- https://avocado-captioner.github.io/ · ☆31 · Updated Oct 16, 2025
- SODA: Story Oriented Dense Video Captioning Evaluation Framework · ☆14 · Updated May 3, 2024
- [CVPR 2023] LOGO: A Long-Form Video Dataset for Group Action Quality Assessment · ☆46 · Updated Apr 9, 2024