Rh-Dang / ECBench
A Holistic Embodied Cognition Benchmark
☆17 · Updated 6 months ago
Alternatives and similar repositories for ECBench
Users interested in ECBench are comparing it to the repositories listed below.
- WorldSense: Evaluating Real-world Omnimodal Understanding for Multimodal LLMs ☆31 · Updated 3 weeks ago
- [ICCV 2025] Official Repository of VideoLLaMB: Long Video Understanding with Recurrent Memory Bridges ☆77 · Updated 7 months ago
- ✨✨The Curse of Multi-Modalities (CMM): Evaluating Hallucinations of Large Multimodal Models across Language, Visual, and Audio ☆48 · Updated 3 months ago
- Official PyTorch Code of ReKV (ICLR'25) ☆57 · Updated 7 months ago
- FreeVA: Offline MLLM as Training-Free Video Assistant ☆63 · Updated last year
- Official repository of DoraemonGPT: Toward Understanding Dynamic Scenes with Large Language Models ☆86 · Updated last year
- [ICLR 2023] CoVLM: Composing Visual Entities and Relationships in Large Language Models Via Communicative Decoding ☆45 · Updated 4 months ago
- [IJCV] EgoPlan-Bench: Benchmarking Multimodal Large Language Models for Human-Level Planning ☆73 · Updated 10 months ago
- ☆33 · Updated 11 months ago
- [ICLR 2025] Video-STaR: Self-Training Enables Video Instruction Tuning with Any Supervision ☆70 · Updated last year
- Official implementation of paper VideoLLM Knows When to Speak: Enhancing Time-Sensitive Video Comprehension with Video-Text Duet Interact… ☆34 · Updated 8 months ago
- Egocentric Video Understanding Dataset (EVUD) ☆31 · Updated last year
- This repo contains evaluation code for the paper "AV-Odyssey: Can Your Multimodal LLMs Really Understand Audio-Visual Information?" ☆30 · Updated 9 months ago
- VideoNIAH: A Flexible Synthetic Method for Benchmarking Video MLLMs ☆48 · Updated 7 months ago
- ☆96 · Updated 2 months ago
- [CVPR 2025 Oral] VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning via Core Frame Selection ☆120 · Updated 2 months ago
- [EMNLP 2025 Oral] ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration ☆57 · Updated last month
- [NeurIPS 2024] The official implementation of "Instruction-Guided Visual Masking" ☆39 · Updated 11 months ago
- ☆60 · Updated last month
- [ECCV 2024] Official code implementation of Merlin: Empowering Multimodal LLMs with Foresight Minds ☆94 · Updated last year
- Repo for paper "T2Vid: Translating Long Text into Multi-Image is the Catalyst for Video-LLMs" ☆48 · Updated last month
- The official repository for our paper, "Open Vision Reasoner: Transferring Linguistic Cognitive Behavior for Visual Reasoning" ☆140 · Updated last month
- Emergent Visual Grounding in Large Multimodal Models Without Grounding Supervision ☆40 · Updated 6 months ago
- DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding ☆66 · Updated 4 months ago
- ☆45 · Updated 9 months ago
- [NeurIPS 2024] MoME: Mixture of Multimodal Experts for Generalist Multimodal Large Language Models ☆73 · Updated 5 months ago
- [ICLR 2025] Official repo for "Streaming Video Understanding and Multi-round Interaction with Memory-enhanced Knowledge" ☆77 · Updated 7 months ago
- Video-Holmes: Can MLLM Think Like Holmes for Complex Video Reasoning? ☆74 · Updated 3 months ago
- ☆94 · Updated 9 months ago
- [ICCV 2025] Official repository of paper "ViSpeak: Visual Instruction Feedback in Streaming Videos" ☆40 · Updated 3 months ago