open-compass / VLMEvalKit
Open-source evaluation toolkit for large multi-modality models (LMMs), supporting 220+ LMMs and 80+ benchmarks
☆2,892 · Updated this week
Alternatives and similar repositories for VLMEvalKit
Users interested in VLMEvalKit are comparing it to the libraries listed below.
- ☆4,115 · Updated 2 months ago
- A fork to add multimodal model training to open-r1 ☆1,368 · Updated 6 months ago
- EasyR1: An Efficient, Scalable, Multi-Modality RL Training Framework based on veRL ☆3,347 · Updated this week
- An open-source implementation for fine-tuning the Qwen2-VL and Qwen2.5-VL series by Alibaba Cloud. ☆1,045 · Updated this week
- A Framework of Small-scale Large Multimodal Models ☆871 · Updated 3 months ago
- InternLM-XComposer2.5-OmniLive: A Comprehensive Multimodal System for Long-term Streaming Video and Audio Interactions ☆2,883 · Updated 2 months ago
- Famous Vision Language Models and Their Architectures ☆979 · Updated 5 months ago
- [ICCV 2025] LLaVA-CoT, a visual language model capable of spontaneous, systematic reasoning ☆2,050 · Updated 3 weeks ago
- A family of lightweight multimodal models. ☆1,027 · Updated 9 months ago
- Next-Token Prediction is All You Need ☆2,178 · Updated 5 months ago
- This repository provides a valuable reference for researchers in the field of multimodality; please start your exploratory travel in RL-bas… ☆1,091 · Updated this week
- Witness the aha moment of VLM with less than $3. ☆3,902 · Updated 2 months ago
- VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs ☆1,205 · Updated 6 months ago
- 【TMM 2025🔥】 Mixture-of-Experts for Large Vision-Language Models ☆2,216 · Updated last month
- Solve Visual Understanding with Reinforced VLMs ☆5,456 · Updated last month
- 🔥🔥🔥 [TCSVT 2025] Latest Papers, Codes and Datasets on Vid-LLMs. ☆2,629 · Updated last week
- 📖 A curated list of resources dedicated to hallucination of multimodal large language models (MLLM). ☆800 · Updated 2 weeks ago
- A novel Multimodal Large Language Model (MLLM) architecture, designed to structurally align visual and textual embeddings. ☆1,044 · Updated this week
- Extend OpenRLHF to support LMM RL training for reproduction of DeepSeek-R1 on multimodal tasks. ☆808 · Updated 3 months ago
- 【ICLR 2024🔥】 Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment ☆820 · Updated last year
- This is the first paper to explore how to effectively use RL for MLLMs and introduce Vision-R1, a reasoning MLLM that leverages cold-sta… ☆668 · Updated last month
- Official Repo for Open-Reasoner-Zero ☆2,022 · Updated 2 months ago
- Kimi-VL: Mixture-of-Experts Vision-Language Model for Multimodal Reasoning, Long-Context Understanding, and Strong Agent Capabilities ☆1,032 · Updated last month
- ☆366 · Updated 6 months ago
- Multimodal Chain-of-Thought Reasoning: A Comprehensive Survey ☆757 · Updated last month
- VILA is a family of state-of-the-art vision language models (VLMs) for diverse multimodal AI tasks across the edge, data center, and clou… ☆3,482 · Updated last week
- An Easy-to-use, Scalable and High-performance RLHF Framework based on Ray (PPO & GRPO & REINFORCE++ & vLLM & Ray & Dynamic Sampling & Asy… ☆7,653 · Updated this week
- VisionLLM Series ☆1,098 · Updated 5 months ago
- Cambrian-1 is a family of multimodal LLMs with a vision-centric design. ☆1,935 · Updated 9 months ago
- Strong and Open Vision Language Assistant for Mobile Devices ☆1,252 · Updated last year