☆109 · Updated Dec 30, 2024
Alternatives and similar repositories for EgoSchema
Users interested in EgoSchema are comparing it to the repositories listed below:
- ☆32 · Updated Jul 29, 2024
- Code and Dataset for the CVPRW Paper "Where did I leave my keys? — Episodic-Memory-Based Question Answering on Egocentric Videos" ☆29 · Updated Aug 28, 2023
- NExT-QA: Next Phase of Question-Answering to Explaining Temporal Actions (CVPR'21) ☆185 · Updated Aug 2, 2025
- [ICCV 2025] LVBench: An Extreme Long Video Understanding Benchmark ☆137 · Updated Jul 9, 2025
- [ECCV 2024🔥] Official implementation of the paper "ST-LLM: Large Language Models Are Effective Temporal Learners" ☆151 · Updated Sep 10, 2024
- VideoNIAH: A Flexible Synthetic Method for Benchmarking Video MLLMs ☆54 · Updated Mar 9, 2025
- 🔥🔥MLVU: Multi-task Long Video Understanding Benchmark ☆242 · Updated Aug 21, 2025
- A VideoQA dataset based on videos from ActivityNet ☆91 · Updated Nov 22, 2020
- Long Context Transfer from Language to Vision ☆402 · Updated Mar 18, 2025
- Code for the CVPR 2025 paper "VideoTree: Adaptive Tree-based Video Representation for LLM Reasoning on Long Videos" ☆154 · Updated Jun 23, 2025
- [NeurIPS '24 D&B] Official Dataloader and Evaluation Scripts for LongVideoBench. ☆113 · Updated Jul 27, 2024
- ☆242 · Updated Jun 4, 2025
- Code for LifelongMemory: Leveraging LLMs for Answering Queries in Long-form Egocentric Videos ☆28 · Updated Oct 27, 2025
- A lightweight, flexible Video-MLLM developed by the TencentQQ Multimedia Research Team. ☆74 · Updated Oct 14, 2024
- ✨✨[CVPR 2025] Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis ☆731 · Updated Dec 8, 2025
- Official implementation of "A Simple LLM Framework for Long-Range Video Question-Answering" ☆106 · Updated Oct 27, 2024
- [ICLR 2026] VideoChat-Flash: Hierarchical Compression for Long-Context Video Modeling ☆511 · Updated Nov 18, 2025
- [CVPR 2025] DyCoke: Dynamic Compression of Tokens for Fast Video Large Language Models ☆101 · Updated Nov 22, 2025
- ☆16 · Updated Sep 25, 2025
- ☆18 · Updated Jul 10, 2024
- [ICCV 2025] Official Repository of VideoLLaMB: Long Video Understanding with Recurrent Memory Bridges ☆83 · Updated Feb 27, 2025
- [CVPR 2024] MA-LMM: Memory-Augmented Large Multimodal Model for Long-Term Video Understanding ☆347 · Updated Jul 19, 2024
- MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities (ICML 2024) ☆322 · Updated Jan 20, 2025
- [AAAI 26 Demo] Official repo for CAT-V - Caption Anything in Video: Object-centric Dense Video Captioning with Spatiotemporal Multimodal P… ☆64 · Updated Jan 27, 2026
- ☆136 · Updated Apr 16, 2025
- Can I Trust Your Answer? Visually Grounded Video Question Answering (CVPR'24, Highlight) ☆83 · Updated Jul 1, 2024
- Awesome papers & datasets specifically focused on long-term videos. ☆355 · Updated Oct 9, 2025
- [ICML 2025] Official PyTorch implementation of LongVU ☆424 · Updated May 8, 2025
- LinVT: Empower Your Image-level Large Language Model to Understand Videos ☆84 · Updated Dec 30, 2024
- [ACL 2024 🔥] Video-ChatGPT is a video conversation model capable of generating meaningful conversation about videos. It combines the cap… ☆1,492 · Updated Aug 5, 2025
- [ACL 2024 Findings] "TempCompass: Do Video LLMs Really Understand Videos?", Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, … ☆129 · Updated Apr 4, 2025
- Code for our ACL 2025 paper "Language Repository for Long Video Understanding" ☆34 · Updated Jun 17, 2024
- Ego4D Goal-Step: Toward Hierarchical Understanding of Procedural Activities (NeurIPS 2023) ☆54 · Updated Apr 15, 2024
- An automatic MLLM hallucination detection framework ☆19 · Updated Sep 26, 2023
- [NeurIPS 2022] Egocentric Video-Language Pretraining ☆256 · Updated May 9, 2024
- Official Repo of "MMBench: Is Your Multi-modal Model an All-around Player?" ☆288 · Updated May 22, 2025
- ☆41 · Updated Sep 9, 2025
- Egocentric Video Understanding Dataset (EVUD) ☆33 · Updated Jul 4, 2024
- Code for the NeurIPS 2022 Datasets and Benchmarks paper "EgoTaskQA: Understanding Human Tasks in Egocentric Videos" ☆37 · Updated Apr 17, 2023