🔥🔥 LLaVA++: Extending LLaVA with Phi-3 and LLaMA-3 (LLaVA LLaMA-3, LLaVA Phi-3)
☆848 · Updated Aug 5, 2025
Alternatives and similar repositories for LLaVA-pp
Users interested in LLaVA-pp are comparing it to the repositories listed below.
- Official repository for the paper PLLaVA ☆676 · Updated Jul 28, 2024
- [TMM 2025 🔥] Mixture-of-Experts for Large Vision-Language Models ☆2,303 · Updated Jul 15, 2025
- A Next-Generation Training Engine Built for Ultra-Large MoE Models ☆5,092 · Updated this week
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses that are seamlessly integrated with object segmentation masks ☆945 · Updated Aug 5, 2025
- Cambrian-1 is a family of multimodal LLMs with a vision-centric design. ☆1,988 · Updated Nov 7, 2025
- [CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4o (an open-source multimodal chat model approaching GPT-4o performance) ☆9,854 · Updated Sep 22, 2025
- [EMNLP 2024 🔥] Video-LLaVA: Learning United Visual Representation by Alignment Before Projection ☆3,452 · Updated Dec 3, 2024
- InternLM-XComposer2.5-OmniLive: A Comprehensive Multimodal System for Long-term Streaming Video and Audio Interactions ☆2,922 · Updated May 26, 2025
- A family of lightweight multimodal models. ☆1,051 · Updated Nov 18, 2024
- [CVPR 2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts ☆336 · Updated Jul 17, 2024
- A Framework of Small-scale Large Multimodal Models ☆963 · Updated Feb 7, 2026
- Official repo for "Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models" ☆3,335 · Updated May 4, 2024
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond. ☆24,500 · Updated Aug 12, 2024
- [ICLR 2025] LLaVA-HR: High-Resolution Large Language-Vision Assistant ☆247 · Updated Aug 14, 2024
- VILA is a family of state-of-the-art vision language models (VLMs) for diverse multimodal AI tasks across the edge, data center, and cloud. ☆3,771 · Updated Nov 28, 2025
- Strong and Open Vision Language Assistant for Mobile Devices ☆1,338 · Updated Apr 15, 2024
- Emu Series: Generative Multimodal Models from BAAI ☆1,768 · Updated Jan 12, 2026
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model ☆281 · Updated Jun 25, 2024
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills ☆763 · Updated Feb 1, 2024
- The official repo of Qwen-VL (通义千问-VL) chat & pretrained large vision language model proposed by Alibaba Cloud. ☆6,545 · Updated Aug 7, 2024
- An open-source implementation for training LLaVA-NeXT. ☆436 · Updated Oct 23, 2024
- An Open-source Toolkit for LLM Development ☆2,806 · Updated Jan 13, 2025
- When do we not need larger vision models? ☆413 · Updated Feb 8, 2025
- Repository for Meta Chameleon, a mixed-modal early-fusion foundation model from FAIR. ☆2,085 · Updated Jul 29, 2024
- Lumina-T2X is a unified framework for Text to Any Modality Generation ☆2,253 · Updated Feb 16, 2025
- A state-of-the-art open visual language model | multimodal pretrained model ☆6,724 · Updated May 29, 2024
- [CVPR 2024] The code for "Osprey: Pixel Understanding with Visual Instruction Tuning" ☆836 · Updated Aug 19, 2025
- LLaVA-UHD v3: Progressive Visual Compression for Efficient Native-Resolution Encoding in MLLMs ☆415 · Updated Dec 20, 2025
- VisionLLaMA: A Unified LLaMA Backbone for Vision Tasks ☆392 · Updated Jul 9, 2024
- PyTorch Implementation of "V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs" ☆691 · Updated Jan 7, 2024
- Autoregressive Model Beats Diffusion: 🦙 Llama for Scalable Image Generation ☆1,937 · Updated Aug 15, 2024
- A Gemini 2.5 Flash Level MLLM for Vision, Speech, and Full-Duplex Multimodal Live Streaming on Your Phone ☆24,027 · Updated Feb 23, 2026
- [CVPR 2025] Official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models" ☆203 · Updated Jun 18, 2025
- LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models (ECCV 2024) ☆860 · Updated Jul 29, 2024
- One-for-All Multimodal Evaluation Toolkit Across Text, Image, Video, and Audio Tasks ☆3,830 · Updated this week
- Open-source evaluation toolkit for large multi-modality models (LMMs), supporting 220+ LMMs and 80+ benchmarks ☆3,868 · Updated this week
- Reproduction of LLaVA-v1.5 based on the Llama-3-8b LLM backbone. ☆65 · Updated Oct 25, 2024