A minimal codebase for finetuning large multimodal models, supporting llava-1.5/1.6, llava-interleave, llava-next-video, llava-onevision, llama-3.2-vision, qwen-vl, qwen2-vl, phi3-v etc.
☆370 · Updated Feb 28, 2026
Alternatives and similar repositories for lmms-finetune
Users interested in lmms-finetune are comparing it to the libraries listed below.
- A Framework of Small-scale Large Multimodal Models · ☆973 · Updated Mar 29, 2026
- ☆4,624 · Updated Sep 14, 2025
- [NeurIPS 2024] Dense Connector for MLLMs · ☆183 · Updated Oct 14, 2024
- An open-source implementation for training LLaVA-NeXT. · ☆436 · Updated Oct 23, 2024
- When do we not need larger vision models? · ☆418 · Updated Feb 8, 2025
- [CVPR 2025] LamRA: Large Multimodal Model as Your Advanced Retrieval Assistant · ☆179 · Updated Jul 7, 2025
- Personal Project: MPP-Qwen14B & MPP-Qwen-Next (Multimodal Pipeline Parallel based on Qwen-LM). Supports [video/image/multi-image] {sft/conv… · ☆670 · Updated Mar 10, 2025
- ☆390 · Updated Feb 8, 2025
- Universal Video Temporal Grounding with Generative Multi-modal Large Language Models · ☆50 · Updated Mar 20, 2026
- [ICCV 2025 Highlight] The official implementation of the paper "LEGION: Learning to Ground and Explain for Synthetic Image Detection" · ☆76 · Updated Oct 22, 2025
- 🔥🔥 LLaVA++: Extending LLaVA with Phi-3 and LLaMA-3 (LLaVA LLaMA-3, LLaVA Phi-3) · ☆846 · Updated Aug 5, 2025
- Use PEFT or Full-parameter to CPT/SFT/DPO/GRPO 600+ LLMs (Qwen3.5, DeepSeek-R1, GLM-5, InternLM3, Llama4, ...) and 300+ MLLMs (Qwen3-VL, … · ☆13,516 · Updated Apr 3, 2026
- One-for-All Multimodal Evaluation Toolkit Across Text, Image, Video, and Audio Tasks · ☆3,977 · Updated this week
- [NeurIPS 2025 🔥] FakeVLM: Advancing Synthetic Image Detection through Explainable Multimodal Models and Fine-Grained Artifact Analysis · ☆128 · Updated Sep 24, 2025
- Aligning LMMs with Factually Augmented RLHF · ☆394 · Updated Nov 1, 2023
- A fork to add multimodal model training to open-r1 · ☆1,520 · Updated Feb 8, 2025
- An open-source implementation for fine-tuning the Qwen-VL series by Alibaba Cloud. · ☆1,796 · Updated Mar 25, 2026
- Visualizing the attention of vision-language models · ☆291 · Updated Feb 28, 2025
- [CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4o. An open-source multimodal dialogue model with performance approaching GPT-4o. · ☆9,949 · Updated Sep 22, 2025
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond. · ☆24,652 · Updated Aug 12, 2024
- [NAACL 2025 Oral] From redundancy to relevance: Enhancing explainability in multimodal large language models · ☆129 · Updated Jan 30, 2026
- [ICCV 2025] LLaVA-CoT, a visual language model capable of spontaneous, systematic reasoning · ☆2,131 · Updated Dec 12, 2025
- LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models · ☆165 · Updated Mar 8, 2026
- Open-source evaluation toolkit for large multi-modality models (LMMs), supporting 220+ LMMs and 80+ benchmarks · ☆4,013 · Updated this week
- VideoNIAH: A Flexible Synthetic Method for Benchmarking Video MLLMs · ☆55 · Updated Mar 9, 2025
- ☆17 · Updated Apr 23, 2025
- A Next-Generation Training Engine Built for Ultra-Large MoE Models · ☆5,118 · Updated this week
- A novel Multimodal Large Language Model (MLLM) architecture designed to structurally align visual and textual embeddings. · ☆1,444 · Updated Feb 11, 2026
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model · ☆281 · Updated Jun 25, 2024
- Official code for the paper "Mantis: Multi-Image Instruction Tuning" [TMLR 2024 Best Paper] · ☆239 · Updated Jan 3, 2026
- Official repository for the paper PLLaVA · ☆675 · Updated Jul 28, 2024
- The official PyTorch implementation of "Exploring the Interactive Guidance for Unified and Effective Image Matting" [TOMM 2025] · ☆25 · Updated Nov 24, 2025
- [ICML 2025] Official PyTorch implementation of LongVU · ☆425 · Updated May 8, 2025
- A SOTA vision model built on top of llama3 8B. · ☆14 · Updated May 28, 2024
- LLaVA-NeXT-Image-Llama3-Lora, modified from https://github.com/arielnlee/LLaVA-1.6-ft · ☆46 · Updated Jul 17, 2024
- Latest Advances on Multimodal Large Language Models · ☆17,568 · Updated Apr 3, 2026
- Cambrian-1 is a family of multimodal LLMs with a vision-centric design. · ☆1,995 · Updated Nov 7, 2025
- VILA is a family of state-of-the-art vision language models (VLMs) for diverse multimodal AI tasks across the edge, data center, and clou… · ☆3,787 · Updated Mar 12, 2026
- TinyLLaVA-Video-R1: Towards Smaller LMMs for Video Reasoning · ☆115 · Updated Dec 24, 2025