MME-Benchmarks / Video-MME
✨✨[CVPR 2025] Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis
☆611 · Updated 2 months ago
Alternatives and similar repositories for Video-MME
Users interested in Video-MME are comparing it to the repositories listed below.
- Long Context Transfer from Language to Vision ☆385 · Updated 4 months ago
- Video-R1: Reinforcing Video Reasoning in MLLMs [🔥the first paper to explore R1 for video] ☆642 · Updated this week
- [CVPR 2024] TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding ☆384 · Updated 2 months ago
- Tarsier -- a family of large-scale video-language models designed to generate high-quality video descriptions, together with g… ☆434 · Updated 3 months ago
- VideoChat-Flash: Hierarchical Compression for Long-Context Video Modeling ☆449 · Updated last month
- [CVPR 2024] MovieChat: From Dense Token to Sparse Memory for Long Video Understanding ☆635 · Updated 6 months ago
- 🔥🔥MLVU: Multi-task Long Video Understanding Benchmark ☆214 · Updated last month
- ✨First Open-Source R1-like Video-LLM [2025/02/18] ☆351 · Updated 5 months ago
- [ICML 2025] Official PyTorch implementation of LongVU ☆392 · Updated 2 months ago
- SlowFast-LLaVA: A Strong Training-Free Baseline for Video Large Language Models ☆235 · Updated 10 months ago
- Official repository for the paper PLLaVA ☆663 · Updated last year
- [CVPR 2024 Highlight] Official PyTorch implementation of the paper "VTimeLLM: Empower LLM to Grasp Video Moments" ☆281 · Updated last year
- 🔥🔥First-ever hour-scale video understanding models ☆506 · Updated 2 weeks ago
- ✨✨Woodpecker: Hallucination Correction for Multimodal Large Language Models ☆639 · Updated 7 months ago
- (CVPR 2024) MA-LMM: Memory-Augmented Large Multimodal Model for Long-Term Video Understanding ☆320 · Updated last year
- Awesome papers & datasets specifically focused on long-term videos ☆283 · Updated 8 months ago
- Explore the Multimodal “Aha Moment” on 2B Model ☆604 · Updated 4 months ago
- Official repository of the paper "VideoGPT+: Integrating Image and Video Encoders for Enhanced Video Understanding" ☆279 · Updated 2 weeks ago
- [ECCV 2024 Oral] Code for the paper "An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Inference Acceleration for Large Vision-Langua…" ☆461 · Updated 6 months ago
- [ACL 2024] GroundingGPT: Language-Enhanced Multi-modal Grounding Model ☆332 · Updated 8 months ago
- [CVPR 2025 Highlight] RLAIF-V: Open-Source AI Feedback Leads to Super GPT-4V Trustworthiness ☆392 · Updated 2 months ago
- Official implementation of the ICCV 2025 paper "Flash-VStream: Efficient Real-Time Understanding for Long Video Streams" ☆216 · Updated 2 weeks ago
- A minimal codebase for finetuning large multimodal models, supporting llava-1.5/1.6, llava-interleave, llava-next-video, llava-onevision,… ☆313 · Updated 5 months ago
- (CVPR 2024) A benchmark for evaluating Multimodal LLMs using multiple-choice questions ☆345 · Updated 6 months ago
- LLaVA-UHD v2: an MLLM Integrating High-Resolution Semantic Pyramid via Hierarchical Window Transformer ☆382 · Updated 3 months ago
- Code for "VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks" [ICLR 2025] ☆347 · Updated this week
- Frontier Multimodal Foundation Models for Image and Video Understanding ☆911 · Updated 2 months ago
- The first paper to explore how to effectively use RL for MLLMs, introducing Vision-R1, a reasoning MLLM that leverages cold-sta… ☆652 · Updated 2 weeks ago
- LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models (ECCV 2024) ☆824 · Updated last year
- NeurIPS 2024 paper: A Unified Pixel-level Vision LLM for Understanding, Generating, Segmenting, Editing ☆558 · Updated 9 months ago