open-compass / VLMEvalKit
Open-source evaluation toolkit for large vision-language models (LVLMs), supporting 160+ VLMs and 50+ benchmarks
☆1,689 Updated this week
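For context, a minimal sketch of how an evaluation run might be launched with the toolkit. The `run.py` entry point and the `--data`/`--model`/`--verbose` flags follow the project's README as I recall it, but treat them as assumptions, and note that the benchmark and model identifiers below are illustrative placeholders; consult the repository's Quickstart for the authoritative options.

```python
# Minimal sketch: launching a VLMEvalKit evaluation from Python.
# Assumes the toolkit is installed and that run.py exposes the
# --data / --model / --verbose options described in its README;
# the benchmark and model names are illustrative placeholders.
import subprocess

cmd = [
    "python", "run.py",
    "--data", "MMBench_DEV_EN",   # benchmark split to evaluate on
    "--model", "qwen_chat",       # model identifier from the toolkit's configs
    "--verbose",
]
subprocess.run(cmd, check=True)
```

Results are typically written to a per-model, per-benchmark output directory; the exact layout and supported identifiers are documented in the repository.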
Alternatives and similar repositories for VLMEvalKit:
Users interested in VLMEvalKit are comparing it to the libraries listed below.
- A Framework of Small-scale Large Multimodal Models ☆709 Updated 3 weeks ago
- InternLM-XComposer2.5-OmniLive: A Comprehensive Multimodal System for Long-term Streaming Video and Audio Interactions ☆2,709 Updated 3 weeks ago
- A family of lightweight multimodal models. ☆972 Updated last month
- ☆3,272 Updated 3 months ago
- Next-Token Prediction is All You Need ☆1,965 Updated 2 months ago
- Mixture-of-Experts for Large Vision-Language Models ☆2,051 Updated last month
- 📖 A curated list of resources dedicated to hallucination of multimodal large language models (MLLM). ☆554 Updated 3 weeks ago
- 【ICLR 2024🔥】 Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment ☆771 Updated 9 months ago
- ☆756 Updated 6 months ago
- 🔥🔥🔥Latest Papers, Codes and Datasets on Vid-LLMs. ☆1,820 Updated this week
- Paper list about multimodal and large language models, only used to record papers I read in the daily arXiv for personal needs. ☆578 Updated this week
- Accelerating the development of large multimodal models (LMMs) with one-click evaluation module - lmms-eval. ☆1,987 Updated this week
- LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models (ECCV 2024) ☆758 Updated 5 months ago
- VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs ☆1,013 Updated last week
- Repository for Show-o, One Single Transformer to Unify Multimodal Understanding and Generation. ☆1,129 Updated 3 weeks ago
- Anole: An Open, Autoregressive, and Native Multimodal Model for Interleaved Image-Text Generation ☆708 Updated 5 months ago
- Strong and Open Vision Language Assistant for Mobile Devices ☆1,106 Updated 9 months ago
- Famous Vision Language Models and Their Architectures ☆565 Updated 4 months ago
- Collection of AWESOME vision-language models for vision tasks ☆2,412 Updated last month
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills ☆720 Updated 11 months ago
- This repository collects papers for "A Survey on Knowledge Distillation of Large Language Models". We break down KD into Knowledge Elicit… ☆732 Updated 2 months ago
- Emu Series: Generative Multimodal Models from BAAI ☆1,673 Updated 3 months ago
- VisionLLM Series ☆977 Updated 2 weeks ago
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) ☆908 Updated last month
- LLaVA-UHD v2: an MLLM Integrating High-Resolution Feature Pyramid via Hierarchical Window Transformer ☆348 Updated this week
- O1 Replication Journey: A Strategic Progress Report – Part I ☆1,861 Updated this week
- Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side-by-side while providing imag… ☆482 Updated 8 months ago
- DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models ☆1,083 Updated last year
- GPT4V-level open-source multi-modal model based on Llama3-8B ☆2,204 Updated 4 months ago
- ✨✨Woodpecker: Hallucination Correction for Multimodal Large Language Models. The first work to correct hallucinations in MLLMs. ☆627 Updated 3 weeks ago