open-compass / VLMEvalKit
Open-source evaluation toolkit for large multi-modality models (LMMs), supporting 220+ LMMs and 80+ benchmarks
☆2,197 · Updated this week
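For quick orientation, the sketch below shows how a model can be queried through VLMEvalKit's Python API, following the pattern in the project's README quickstart; the model key `qwen_chat` and the image path are illustrative assumptions, not guaranteed defaults.

```python
# Minimal sketch: querying a model through VLMEvalKit's Python API.
# Assumes `pip install vlmeval`; the model key and image path below are
# illustrative assumptions -- check vlmeval.config for supported names.
from vlmeval.config import supported_VLM

model = supported_VLM['qwen_chat']()  # build a supported VLM by its registry key
# generate() takes an interleaved list of image paths and text prompts
response = model.generate(['assets/apple.jpg', 'What is in this image?'])
print(response)
```

Full benchmark runs are driven by the repository's `run.py` script, which takes `--data` and `--model` arguments per the project docs.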
Alternatives and similar repositories for VLMEvalKit:
Users interested in VLMEvalKit are comparing it to the libraries listed below.
- A fork to add multimodal model training to open-r1 ☆1,181 · Updated 2 months ago
- A Framework of Small-scale Large Multimodal Models ☆796 · Updated 3 weeks ago
- ☆3,686 · Updated last month
- An open-source implementation for fine-tuning the Qwen2-VL and Qwen2.5-VL series by Alibaba Cloud. ☆628 · Updated this week
- EasyR1: An Efficient, Scalable, Multi-Modality RL Training Framework based on veRL ☆1,968 · Updated this week
- Next-Token Prediction is All You Need ☆2,076 · Updated last month
- Official repository of 'Visual-RFT: Visual Reinforcement Fine-Tuning' ☆1,542 · Updated 3 weeks ago
- InternLM-XComposer2.5-OmniLive: A Comprehensive Multimodal System for Long-term Streaming Video and Audio Interactions ☆2,808 · Updated 2 months ago
- Famous Vision Language Models and Their Architectures ☆778 · Updated last month
- Accelerating the development of large multimodal models (LMMs) with the one-click evaluation module lmms-eval. ☆2,339 · Updated this week
- A family of lightweight multimodal models. ☆1,013 · Updated 5 months ago
- ☆349 · Updated 2 months ago
- VisionLLM Series ☆1,041 · Updated last month
- LLaVA-CoT, a visual language model capable of spontaneous, systematic reasoning ☆1,943 · Updated this week
- Witness the aha moment of VLM with less than $3. ☆3,522 · Updated last month
- 📖 A curated list of resources dedicated to hallucination in multimodal large language models (MLLMs). ☆647 · Updated last week
- VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs ☆1,143 · Updated 2 months ago
- This repository provides a valuable reference for researchers in the field of multimodality; start your exploration of RL-bas… ☆591 · Updated this week
- Cambrian-1 is a family of multimodal LLMs with a vision-centric design. ☆1,886 · Updated 5 months ago
- MM-EUREKA: Exploring Visual Aha Moment with Rule-based Large-scale Reinforcement Learning ☆518 · Updated this week
- Extend OpenRLHF to support LMM RL training for reproduction of DeepSeek-R1 on multimodal tasks. ☆708 · Updated this week
- Explore the Multimodal “Aha Moment” on a 2B Model ☆561 · Updated last month
- O1 Replication Journey ☆1,983 · Updated 3 months ago
- This is the first paper to explore how to effectively use RL for MLLMs, introducing Vision-R1, a reasoning MLLM that leverages cold-sta… ☆478 · Updated this week
- [ICLR 2025] Repository for Show-o, One Single Transformer to Unify Multimodal Understanding and Generation. ☆1,348 · Updated 2 weeks ago
- A novel Multimodal Large Language Model (MLLM) architecture, designed to structurally align visual and textual embeddings. ☆886 · Updated 3 weeks ago
- Recent LLM-based CV and related works; comments and contributions are welcome! ☆863 · Updated last month
- R1-onevision, a visual language model capable of deep CoT reasoning. ☆489 · Updated this week
- Mixture-of-Experts for Large Vision-Language Models ☆2,145 · Updated 4 months ago
- Multimodal Chain-of-Thought Reasoning: A Comprehensive Survey ☆431 · Updated last week