Ego-R1: Chain-of-Tool-Thought for Ultra-Long Egocentric Video Reasoning
★141 · Aug 21, 2025 · Updated 6 months ago
Alternatives and similar repositories for Ego-R1
Users interested in Ego-R1 are comparing it to the repositories listed below.
- 🧠 VideoMind: A Chain-of-LoRA Agent for Temporal-Grounded Video Reasoning (ICLR 2026) · ★305 · Feb 8, 2026 · Updated 3 weeks ago
- Video-Holmes: Can MLLM Think Like Holmes for Complex Video Reasoning? · ★88 · Jul 13, 2025 · Updated 7 months ago
- [ICML 2025] LaCache: Ladder-Shaped KV Caching for Efficient Long-Context Modeling of Large Language Models · ★17 · Nov 4, 2025 · Updated 4 months ago
- Vinci: A Real-time Embodied Smart Assistant based on Egocentric Vision-Language Model · ★82 · Nov 27, 2025 · Updated 3 months ago
- Code for "Skill-based Chain-of-Thoughts for Domain-Adaptive Video Reasoning" [EMNLP 2025 Findings] · ★15 · Aug 27, 2025 · Updated 6 months ago
- ★21 · Feb 13, 2026 · Updated 2 weeks ago
- MM-Instruct: Generated Visual Instructions for Large Multimodal Model Alignment · ★35 · Jul 1, 2024 · Updated last year
- [CVPR 2025] EgoLife: Towards Egocentric Life Assistant · ★399 · Mar 19, 2025 · Updated 11 months ago
- ★16 · Sep 25, 2025 · Updated 5 months ago
- The official implementation of "Grounded Chain-of-Thought for Multimodal Large Language Models" · ★21 · Jul 21, 2025 · Updated 7 months ago
- TinyLLaVA-Video-R1: Towards Smaller LMMs for Video Reasoning · ★114 · Dec 24, 2025 · Updated 2 months ago
- Long-RL: Scaling RL to Long Sequences (NeurIPS 2025) · ★693 · Sep 24, 2025 · Updated 5 months ago
- MR. Video: MapReduce is the Principle for Long Video Understanding · ★30 · Apr 23, 2025 · Updated 10 months ago
- [NeurIPS 2024] Official code for HourVideo: 1-Hour Video Language Understanding · ★160 · Jul 12, 2025 · Updated 7 months ago
- The first spoken long-text dataset derived from live streams, designed to reflect the redundancy-rich and conversational nature of real-w… · ★12 · Jun 28, 2025 · Updated 8 months ago
- ★98 · Jun 23, 2025 · Updated 8 months ago
- [ICML 2025] Streamline Without Sacrifice - Squeeze out Computation Redundancy in LMM · ★20 · May 22, 2025 · Updated 9 months ago
- PhysGame: Benchmark for Physical Commonsense Evaluation in Gameplay Videos · ★48 · Jul 3, 2025 · Updated 8 months ago
- [CVPR 2026] SpatialScore: Towards Comprehensive Evaluation for Spatial Intelligence · ★63 · Jul 9, 2025 · Updated 7 months ago
- [LLaVA-Video-R1] ✨ First Adaptation of R1 to LLaVA-Video (2025-03-18) · ★68 · May 9, 2025 · Updated 9 months ago
- https://huggingface.co/datasets/multimodal-reasoning-lab/Zebra-CoT · ★125 · Jan 30, 2026 · Updated last month
- ∞-Video: A Training-Free Approach to Long Video Understanding via Continuous-Time Memory Consolidation · ★19 · Feb 14, 2025 · Updated last year
- OmniGAIA: Towards Native Omni-Modal AI Agents · ★46 · Updated this week
- [ICLR 2025] Video-STaR: Self-Training Enables Video Instruction Tuning with Any Supervision · ★72 · Jul 10, 2024 · Updated last year
- ★156 · Oct 31, 2024 · Updated last year
- TStar: a unified temporal search framework for long-form video question answering · ★88 · Sep 2, 2025 · Updated 6 months ago
- Streaming Video Instruction Tuning · ★45 · Feb 25, 2026 · Updated last week
- [Blog 1] Recording a bug of grpo_trainer in some R1 projects · ★22 · Feb 23, 2025 · Updated last year
- [ACL 2025 Oral & Award] Evaluate Image/Video Generation like Humans: Fast, Explainable, Flexible · ★121 · Aug 10, 2025 · Updated 6 months ago
- The code for "VISTA: Enhancing Long-Duration and High-Resolution Video Understanding by VIdeo SpatioTemporal Augmentation" [CVPR 2025] · ★21 · Feb 27, 2025 · Updated last year
- Code and data for the paper "Exploring Hallucination of Large Multimodal Models in Video Understanding: Benchmark, Analysis and Mitigation" · ★23 · Oct 22, 2025 · Updated 4 months ago
- The official repo for LIFT: Language-Image Alignment with Fixed Text Encoders · ★42 · Jun 10, 2025 · Updated 8 months ago
- Diffusion Powers Video Tokenizer for Comprehension and Generation (CVPR 2025) · ★86 · Feb 27, 2025 · Updated last year
- Code for the paper "Vamba: Understanding Hour-Long Videos with Hybrid Mamba-Transformers" [ICCV 2025] · ★101 · Jul 28, 2025 · Updated 7 months ago
- [CVPR 2025] Unveil Inversion and Invariance in Flow Transformer for Versatile Image Editing · ★23 · Aug 23, 2025 · Updated 6 months ago
- A collection of awesome "think with videos" papers · ★90 · Dec 1, 2025 · Updated 3 months ago
- Code for the Molmo2 Vision-Language Model · ★172 · Dec 16, 2025 · Updated 2 months ago
- [CVPR 2026] TimeLens: Rethinking Video Temporal Grounding with Multimodal LLMs · ★105 · Updated this week
- ★20 · May 11, 2025 · Updated 9 months ago