JoeLeelyf / OVO-Bench
[CVPR 2025] OVO-Bench: How Far is Your Video-LLMs from Real-World Online Video Understanding?
☆53 · Updated last week
Alternatives and similar repositories for OVO-Bench:
Users interested in OVO-Bench are comparing it to the repositories listed below.
- [CVPR 2025 Oral] VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning via Core Frame Selection ☆70 · Updated this week
- Official repo for "Streaming Video Understanding and Multi-round Interaction with Memory-enhanced Knowledge" (ICLR 2025) ☆42 · Updated last month
- Official repository of DoraemonGPT: Toward Understanding Dynamic Scenes with Large Language Models ☆83 · Updated 7 months ago
- Official implementation of MIA-DPO ☆54 · Updated 2 months ago
- TemporalBench: Benchmarking Fine-grained Temporal Understanding for Multimodal Video Models ☆29 · Updated 5 months ago
- The official repository for the paper "PruneVid: Visual Token Pruning for Efficient Video Large Language Models" ☆35 · Updated last month
- VideoChat-R1: Enhancing Spatio-Temporal Perception via Reinforcement Fine-Tuning ☆54 · Updated this week
- 👾 E.T. Bench: Towards Open-Ended Event-Level Video-Language Understanding (NeurIPS 2024) ☆57 · Updated 2 months ago
- [CVPR 2025] Adaptive Keyframe Sampling for Long Video Understanding ☆50 · Updated last week
- [ICLR 2025] AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark ☆89 · Updated 2 months ago
- VideoNIAH: A Flexible Synthetic Method for Benchmarking Video MLLMs ☆46 · Updated last month
- [ACL 2024 Findings] "TempCompass: Do Video LLMs Really Understand Videos?", Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, … ☆110 · Updated last week
- ☆60 · Updated 3 weeks ago
- ☆30 · Updated 8 months ago
- ☆66 · Updated last week
- Repo for the paper "T2Vid: Translating Long Text into Multi-Image is the Catalyst for Video-LLMs" ☆49 · Updated last month
- Official repo for CAT-V - Caption Anything in Video: Object-centric Dense Video Captioning with Spatiotemporal Multimodal Prompting ☆24 · Updated last week
- Evolving Temporal Reasoning Capability into LMMs via Temporal Consistent Reward ☆30 · Updated 3 weeks ago
- Grounded-VideoLLM: Sharpening Fine-grained Temporal Grounding in Video Large Language Models ☆98 · Updated 3 weeks ago
- [ACL 2024 Oral] Tuning Large Multimodal Models for Videos using Reinforcement Learning from AI Feedback ☆64 · Updated 7 months ago
- Evaluation code for the paper "AV-Odyssey: Can Your Multimodal LLMs Really Understand Audio-Visual Information?" ☆23 · Updated 3 months ago
- Official PyTorch code of GroundVQA (CVPR 2024) ☆59 · Updated 7 months ago
- [CVPR 2025] PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction ☆89 · Updated last month
- Unifying Visual Understanding and Generation with Dual Visual Vocabularies 🌈 ☆37 · Updated 3 weeks ago
- [ECCV 2024] Learning Video Context as Interleaved Multimodal Sequences ☆38 · Updated last month
- R1-like Video-LLM for Temporal Grounding ☆78 · Updated this week
- [NeurIPS 2024] One Token to Seg Them All: Language Instructed Reasoning Segmentation in Videos ☆112 · Updated 3 months ago
- [NeurIPS 2024 D&B] Official Dataloader and Evaluation Scripts for LongVideoBench ☆94 · Updated 8 months ago
- ☆30 · Updated 2 months ago
- [ICLR 2025] CREMA: Generalizable and Efficient Video-Language Reasoning via Multimodal Modular Fusion ☆41 · Updated 2 months ago