PolyU-ChenLab / ETBench
👾 E.T. Bench: Towards Open-Ended Event-Level Video-Language Understanding (NeurIPS 2024)
☆70 · Updated 9 months ago
Alternatives and similar repositories for ETBench
Users interested in ETBench are comparing it to the repositories listed below.
- ☆80 · Updated 11 months ago
- [ECCV 2024] ControlCap: Controllable Region-level Captioning ☆79 · Updated last year
- Official PyTorch code of GroundVQA (CVPR'24) ☆64 · Updated last year
- [CVPR 2025 Oral] VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning via Core Frame Selection ☆125 · Updated 3 months ago
- Official implementation of HawkEye: Training Video-Text LLMs for Grounding Text in Videos ☆44 · Updated last year
- FreeVA: Offline MLLM as Training-Free Video Assistant ☆64 · Updated last year
- [ACL 2025] PruneVid: Visual Token Pruning for Efficient Video Large Language Models ☆55 · Updated 6 months ago
- Reinforcement Learning Tuning for VideoLLMs: Reward Design and Data Efficiency ☆57 · Updated 5 months ago
- [AAAI 2025] Grounded Multi-Hop VideoQA in Long-Form Egocentric Videos ☆27 · Updated 5 months ago
- Official repository for "IntentQA: Context-aware Video Intent Reasoning" from ICCV 2023 ☆21 · Updated 11 months ago
- A Versatile Video-LLM for Long and Short Video Understanding with Superior Temporal Localization Ability ☆101 · Updated 11 months ago
- [CVPR 2024] Official PyTorch implementation of the paper "One For All: Video Conversation is Feasible Without Video Instruction Tuning" ☆35 · Updated last year
- ☆36 · Updated last year
- [CVPR 2024] Context-Guided Spatio-Temporal Video Grounding ☆61 · Updated last year
- [ICLR 2025] TimeSuite: Improving MLLMs for Long Video Understanding via Grounded Tuning ☆48 · Updated 7 months ago
- [NeurIPS 2024] One Token to Seg Them All: Language Instructed Reasoning Segmentation in Videos ☆140 · Updated 10 months ago
- TemporalBench: Benchmarking Fine-grained Temporal Understanding for Multimodal Video Models ☆37 · Updated last year
- ☆26 · Updated 7 months ago
- [NeurIPS'25] Time-R1: Post-Training Large Vision Language Model for Temporal Video Grounding ☆59 · Updated 3 weeks ago
- (NeurIPS 2024 Spotlight) TOPA: Extend Large Language Models for Video Understanding via Text-Only Pre-Alignment ☆31 · Updated last year
- Egocentric Video Understanding Dataset (EVUD) ☆32 · Updated last year
- [ECCV 2024] Elysium: Exploring Object-level Perception in Videos via MLLM ☆86 · Updated last year
- [EMNLP 2025 Findings] Grounded-VideoLLM: Sharpening Fine-grained Temporal Grounding in Video Large Language Models ☆134 · Updated 2 months ago
- ☆107 · Updated last year
- ACL'24 (Oral) Tuning Large Multimodal Models for Videos using Reinforcement Learning from AI Feedback ☆76 · Updated last year
- [ECCV 2024] Official code implementation of Merlin: Empowering Multimodal LLMs with Foresight Minds ☆94 · Updated last year
- [ICLR 2025] Official code implementation of Video-UTR: Unhackable Temporal Rewarding for Scalable Video MLLMs ☆61 · Updated 8 months ago
- ☆32 · Updated last year
- [CVPR 2024] Data and benchmark code for the EgoExoLearn dataset ☆70 · Updated 2 months ago
- [CVPR 2025] LLaVA-ST: A Multimodal Large Language Model for Fine-Grained Spatial-Temporal Understanding ☆77 · Updated 4 months ago