chaxjli / U-MARVEL
☆25 · Updated 4 months ago
Alternatives and similar repositories for U-MARVEL
Users interested in U-MARVEL are comparing it to the repositories listed below.
- [CVPR 2025] LamRA: Large Multimodal Model as Your Advanced Retrieval Assistant ☆171 · Updated 5 months ago
- Official repository of MMDU dataset ☆98 · Updated last year
- Code for "CAFe: Unifying Representation and Generation with Contrastive-Autoregressive Finetuning" ☆25 · Updated 8 months ago
- Reinforcement Learning Tuning for VideoLLMs: Reward Design and Data Efficiency ☆59 · Updated 6 months ago
- R1-like Video-LLM for Temporal Grounding ☆126 · Updated 5 months ago
- ☆155 · Updated last year
- ☆25 · Updated last year
- ACL'24 (Oral) Tuning Large Multimodal Models for Videos using Reinforcement Learning from AI Feedback ☆76 · Updated last year
- Official implementation of HawkEye: Training Video-Text LLMs for Grounding Text in Videos ☆45 · Updated last year
- [ACL 2024 Findings] "TempCompass: Do Video LLMs Really Understand Videos?", Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, … ☆125 · Updated 8 months ago
- ☆37 · Updated last year
- ☆80 · Updated last year
- [CVPR 2025 Oral] VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning via Core Frame Selection ☆129 · Updated 4 months ago
- LLaVE: Large Language and Vision Embedding Models with Hardness-Weighted Contrastive Learning ☆73 · Updated 6 months ago
- [Neurips 24' D&B] Official Dataloader and Evaluation Scripts for LongVideoBench. ☆112 · Updated last year
- [ICCV 2025] LVBench: An Extreme Long Video Understanding Benchmark ☆128 · Updated 5 months ago
- Code for "Strengthening Multimodal Large Language Model with Bootstrapped Preference Optimization" ☆59 · Updated last year
- Official code for "Modality Curation: Building Universal Embeddings for Advanced Multimodal Information Retrieval" ☆39 · Updated 5 months ago
- [ICLR 2025] TRACE: Temporal Grounding Video LLM via Causal Event Modeling ☆141 · Updated 3 months ago
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆77 · Updated last year
- The official implementation of RAR ☆93 · Updated this week
- MLLM-Tool: A Multimodal Large Language Model For Tool Agent Learning ☆136 · Updated 2 months ago
- ☆140 · Updated last year
- VideoNIAH: A Flexible Synthetic Method for Benchmarking Video MLLMs ☆51 · Updated 9 months ago
- VCR-Bench: A Comprehensive Evaluation Framework for Video Chain-of-Thought Reasoning ☆32 · Updated 4 months ago
- A Versatile Video-LLM for Long and Short Video Understanding with Superior Temporal Localization Ability ☆104 · Updated last year
- 🔥🔥MLVU: Multi-task Long Video Understanding Benchmark ☆235 · Updated 3 months ago
- TemporalBench: Benchmarking Fine-grained Temporal Understanding for Multimodal Video Models ☆37 · Updated last year
- 👾 E.T. Bench: Towards Open-Ended Event-Level Video-Language Understanding (NeurIPS 2024) ☆71 · Updated 10 months ago
- A collection of visual instruction tuning datasets. ☆76 · Updated last year