Code and Dataset for the CVPRW Paper "Where did I leave my keys? — Episodic-Memory-Based Question Answering on Egocentric Videos"
☆29, updated Aug 28, 2023
Alternatives and similar repositories for qaego4d
Users interested in qaego4d are comparing it to the libraries listed below.
- ☆109, updated Dec 30, 2024
- [ECCV 2022] GEB+: A Benchmark for Generic Event Boundary Captioning, Grounding and Retrieval (☆17, updated Aug 24, 2022)
- [NeurIPS 2022] Egocentric Video-Language Pretraining (☆256, updated May 9, 2024)
- Official PyTorch code of GroundVQA (CVPR'24) (☆64, updated Sep 13, 2024)
- An alternative EQA paradigm and informative benchmark + models (BMVC 2019, ViGIL 2019 spotlight) (☆25, updated Jun 22, 2022)
- ☆26, updated Apr 26, 2025
- Repository for MarioQA: Answering Questions by Watching Gameplay Videos in ICCV 2017 (☆10, updated Oct 28, 2025)
- Ego4D Goal-Step: Toward Hierarchical Understanding of Procedural Activities (NeurIPS 2023) (☆54, updated Apr 15, 2024)
- ☆132, updated May 30, 2024
- ☆57, updated Apr 4, 2024
- In-the-wild Question Answering (☆15, updated May 10, 2023)
- WorldSense benchmark for grounded reasoning in language models (☆24, updated Nov 28, 2023)
- The official GitHub page for "What Makes for Good Visual Instructions? Synthesizing Complex Visual Reasoning Instructions for Visual Ins…" (☆19, updated Nov 10, 2023)
- The champion solution for the Ego4D Natural Language Queries Challenge in CVPR 2023 (☆18, updated Jan 23, 2024)
- A VideoQA dataset based on the videos from ActivityNet (☆91, updated Nov 22, 2020)
- CVPR 2022 (Oral) PyTorch code for Unsupervised Vision-and-Language Pre-training via Retrieval-based Multi-Granular Alignment (☆22, updated Apr 15, 2022)
- [CVPR 2023] Official code for "Learning Procedure-aware Video Representation from Instructional Videos and Their Narrations" (☆56, updated Aug 8, 2023)
- ☆20, updated Apr 24, 2024
- [AAAI 2025] Grounded Multi-Hop VideoQA in Long-Form Egocentric Videos (☆33, updated May 27, 2025)
- ☆53, updated Jan 3, 2023
- The implementation of "Learning by Planning: Language-Guided Global Image Editing" (☆25, updated May 10, 2023)
- [NeurIPS 2024] MSR3D: Advanced Situated Reasoning in 3D Scenes (☆70, updated Dec 2, 2025)
- Visual Navigation with Natural Multimodal Assistance (EMNLP 2019) (☆29, updated Jun 30, 2020)
- NExT-QA: Next Phase of Question-Answering to Explaining Temporal Actions (CVPR'21) (☆185, updated Aug 2, 2025)
- ICCV 2021: a brand-new hub for Scene Graph Generation methods based on MMDetection (2021). The pipeline from detection, scene graph ge… (☆63, updated Oct 12, 2021)
- [PR 2024] A large Cross-Modal Video Retrieval Dataset with Reading Comprehension (☆28, updated Dec 28, 2023)
- Simple script to compute CLIP-based scores given a DALL-E-trained model (☆29, updated Jun 13, 2021)
- The official implementation of JM3D (☆31, updated Aug 18, 2025)
- This repo contains source code for Glance and Focus: Memory Prompting for Multi-Event Video Question Answering (NeurIPS 2023) (☆31, updated Jun 28, 2024)
- ☆32, updated Feb 8, 2024
- ☆66, updated Jun 16, 2023
- ☆25, updated Mar 15, 2022
- Code accompanying EGO-TOPO: Environment Affordances from Egocentric Video (CVPR 2020) (☆31, updated Aug 3, 2022)
- Multimodal Open-O1 (MO1) is designed to enhance the accuracy of inference models by utilizing a novel prompt-based approach. This tool wo… (☆29, updated Sep 25, 2024)
- ROCK model for Knowledge-Based VQA in Videos (☆31, updated Oct 19, 2020)
- Ego4D dataset repository: download the dataset, visualize it, extract features, and see example usage (☆537, updated Feb 19, 2026)
- The 1st-place solution to the 2022 Ego4D Natural Language Queries challenge (☆32, updated Sep 5, 2022)
- Code for "Context-aware Alignment and Mutual Masking for 3D-Language Pre-training" (CVPR 2023) (☆29, updated Jun 15, 2023)
- EILeV: Eliciting In-Context Learning in Vision-Language Models for Videos Through Curated Data Distributional Properties (☆132, updated Nov 10, 2024)