ChenYi99 / EgoPlan
☆65 · Updated last month
Alternatives and similar repositories for EgoPlan:
Users interested in EgoPlan are comparing it to the repositories listed below.
- Egocentric Video Understanding Dataset (EVUD) ☆24 · Updated 6 months ago
- [NeurIPS 2024] Official code for HourVideo: 1-Hour Video Language Understanding ☆62 · Updated 3 weeks ago
- Latent Motion Token as the Bridging Language for Robot Manipulation ☆67 · Updated last month
- ☆43 · Updated 9 months ago
- [CVPR'24 Highlight] Official code and data for the paper "EgoThink: Evaluating First-Person Perspective Thinking Capability of Vision-Lan…" ☆55 · Updated last month
- Official implementation of CoVLM: Composing Visual Entities and Relationships in Large Language Models via Communicative Decoding ☆43 · Updated last year
- VideoNIAH: A Flexible Synthetic Method for Benchmarking Video MLLMs ☆33 · Updated 3 months ago
- TemporalBench: Benchmarking Fine-grained Temporal Understanding for Multimodal Video Models ☆27 · Updated 2 months ago
- Language Repository for Long Video Understanding ☆31 · Updated 7 months ago
- Official implementation of CAPEAM (ICCV'23) ☆11 · Updated 2 months ago
- [NeurIPS 2024] Official implementation of "Instruction-Guided Visual Masking" ☆32 · Updated 2 months ago
- Official implementation of ReALFRED (ECCV'24) ☆32 · Updated 3 months ago
- 👾 E.T. Bench: Towards Open-Ended Event-Level Video-Language Understanding (NeurIPS 2024) ☆50 · Updated last week
- Code for the paper "Video-3D LLM: Learning Position-Aware Video Representation for 3D Scene Understanding" ☆43 · Updated last month
- ☆136 · Updated 3 months ago
- Official repository of DoraemonGPT: Toward Understanding Dynamic Scenes with Large Language Models ☆81 · Updated 4 months ago
- [ICLR 2025] Video-STaR: Self-Training Enables Video Instruction Tuning with Any Supervision ☆58 · Updated 6 months ago
- ☆25 · Updated last year
- ☆42 · Updated last month
- [ACL'24 Oral] Tuning Large Multimodal Models for Videos using Reinforcement Learning from AI Feedback ☆57 · Updated 4 months ago
- [EMNLP 2024] A Video Chat Agent with Temporal Prior ☆28 · Updated last month
- Can 3D Vision-Language Models Truly Understand Natural Language? ☆21 · Updated 10 months ago
- Official implementation of "A Simple LLM Framework for Long-Range Video Question-Answering" ☆91 · Updated 3 months ago
- Ego4D Goal-Step: Toward Hierarchical Understanding of Procedural Activities (NeurIPS 2023) ☆39 · Updated 9 months ago
- Code for the paper "Super-CLEVR: A Virtual Benchmark to Diagnose Domain Robustness in Visual Reasoning" ☆23 · Updated last year
- [ACL 2024 Findings] "TempCompass: Do Video LLMs Really Understand Videos?", Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, … ☆97 · Updated 2 months ago
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos ☆144 · Updated last week
- IMProv: Inpainting-based Multimodal Prompting for Computer Vision Tasks ☆59 · Updated 4 months ago
- Official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models" ☆91 · Updated 6 months ago
- Awesome papers for multi-modal LLMs with grounding ability ☆14 · Updated 5 months ago