EvolvingLMMs-Lab / lmms-eval
One-for-All Multimodal Evaluation Toolkit Across Text, Image, Video, and Audio Tasks
☆3,437 · Updated last week
Alternatives and similar repositories for lmms-eval
Users interested in lmms-eval are comparing it to the libraries listed below.
- Align Anything: Training All-modality Model with Feedback ☆4,613 · Updated last month
- Collection of AWESOME vision-language models for vision tasks ☆3,039 · Updated 2 months ago
- Monkey (LMM): Image Resolution and Text Label Are Important Things for Large Multi-modal Models (CVPR 2024 Highlight) ☆1,939 · Updated 2 months ago
- Uni-MoE: Lychee's Large Multimodal Model Family ☆1,052 · Updated this week
- Open-source evaluation toolkit for large multi-modality models (LMMs), supporting 220+ LMMs and 80+ benchmarks ☆3,578 · Updated this week
- [ICLR & NeurIPS 2025] Repository for the Show-o series: One Single Transformer to Unify Multimodal Understanding and Generation ☆1,835 · Updated this week
- Eagle: Frontier Vision-Language Models with Data-Centric Strategies ☆912 · Updated 2 months ago
- A family of lightweight multimodal models.