A novel Multimodal Large Language Model (MLLM) architecture, designed to structurally align visual and textual embeddings.
☆1,439 · Updated Feb 11, 2026
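As a rough illustration of what "structurally aligning visual and textual embeddings" means, the sketch below converts visual features into soft indices over a learnable visual embedding table, so image inputs reach the language model as embedding-table lookups in the same structural form as text tokens. This is a conceptual sketch under assumptions about the design, not code from the Ovis repository; the class name, vocabulary sizes, and dimensions are invented for the example.

```python
import torch
import torch.nn as nn

class VisualTokenizer(nn.Module):
    """Conceptual sketch (hypothetical, not Ovis source code): map patch features
    to probabilistic 'visual tokens' over a learnable visual embedding table,
    mirroring how text tokens index a textual embedding table."""

    def __init__(self, feat_dim=1024, visual_vocab=8192, embed_dim=4096):
        super().__init__()
        self.head = nn.Linear(feat_dim, visual_vocab)               # patch feature -> logits over a visual vocabulary
        self.visual_embed = nn.Embedding(visual_vocab, embed_dim)   # learnable visual embedding table

    def forward(self, patch_feats):                                  # [batch, n_patches, feat_dim]
        probs = self.head(patch_feats).softmax(dim=-1)               # soft assignment to visual "words"
        return probs @ self.visual_embed.weight                      # [batch, n_patches, embed_dim]

# Text side for comparison: an ordinary embedding lookup.
text_embed = nn.Embedding(32000, 4096)
text_tokens = torch.randint(0, 32000, (1, 16))
text_vecs = text_embed(text_tokens)                                  # [1, 16, 4096]

vt = VisualTokenizer()
visual_vecs = vt(torch.randn(1, 256, 1024))                          # [1, 256, 4096]
llm_input = torch.cat([visual_vecs, text_vecs], dim=1)               # both modalities share one embedding space
```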
Alternatives and similar repositories for Ovis
Users that are interested in Ovis are comparing it to the libraries listed below.
- [CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4o. An open-source multimodal dialogue model approaching GPT-4o performance. ☆9,904 · Updated Sep 22, 2025
- Open-source evaluation toolkit for large multi-modality models (LMMs), supporting 220+ LMMs and 80+ benchmarks ☆3,920 · Updated this week
- Qwen3-VL is the multimodal large language model series developed by the Qwen team, Alibaba Cloud. ☆18,671 · Updated Jan 30, 2026
- Seed1.5-VL, a vision-language foundation model designed to advance general-purpose multimodal understanding and reasoning, achieving state-of-the-art… ☆1,558 · Updated Jun 14, 2025
- ☆4,607 · Updated Sep 14, 2025
- InternLM-XComposer2.5-OmniLive: A Comprehensive Multimodal System for Long-term Streaming Video and Audio Interactions ☆2,923 · Updated May 26, 2025
- [ICLR 2025] MLLM for On-Demand Spatial-Temporal Understanding at Arbitrary Resolution ☆330 · Updated Jul 4, 2025
- Solve Visual Understanding with Reinforced VLMs ☆5,872 · Updated Mar 12, 2026
- Kimi-VL: Mixture-of-Experts Vision-Language Model for Multimodal Reasoning, Long-Context Understanding, and Strong Agent Capabilities ☆1,168 · Updated Jul 15, 2025
- Cambrian-1 is a family of multimodal LLMs with a vision-centric design. ☆1,995 · Updated Nov 7, 2025
- A unified model that seamlessly integrates multimodal understanding, text-to-image generation, and image editing within a single powerful… ☆452 · Updated Dec 2, 2025
- ☆191 · Updated Mar 13, 2026
- GPT4V-level open-source multi-modal model based on Llama3-8B ☆2,434 · Updated Mar 3, 2025
- Eagle: Frontier Vision-Language Models with Data-Centric Strategies ☆934 · Updated Oct 25, 2025
- ✨✨[NeurIPS 2025] VITA-1.5: Towards GPT-4o Level Real-Time Vision and Speech Interaction ☆2,500 · Updated Mar 28, 2025
- VILA is a family of state-of-the-art vision language models (VLMs) for diverse multimodal AI tasks across the edge, data center, and cloud. ☆3,786 · Updated Mar 12, 2026
- A fork to add multimodal model training to open-r1 ☆1,507 · Updated Feb 8, 2025
- Witness the aha moment of VLM with less than $3. ☆4,041 · Updated May 19, 2025
- Next-Token Prediction is All You Need ☆2,374 · Updated Jan 12, 2026
- [ICLR & NeurIPS 2025] Repository for Show-o series, One Single Transformer to Unify Multimodal Understanding and Generation. ☆1,903 · Updated Jan 8, 2026
- R1-onevision, a visual language model capable of deep CoT reasoning. ☆577 · Updated Apr 13, 2025
- Use PEFT or full-parameter training for CPT/SFT/DPO/GRPO on 600+ LLMs (Qwen3.5, DeepSeek-R1, GLM-5, InternLM3, Llama4, ...) and 300+ MLLMs (Qwen3-VL, …) ☆13,263 · Updated this week
- Official repository of 'Visual-RFT: Visual Reinforcement Fine-Tuning' & 'Visual-ARFT: Visual Agentic Reinforcement Fine-Tuning' ☆2,317 · Updated Oct 29, 2025
- When do we not need larger vision models? ☆415 · Updated Feb 8, 2025
- A Gemini 2.5 Flash Level MLLM for Vision, Speech, and Full-Duplex Multimodal Live Streaming on Your Phone ☆24,144 · Updated Mar 7, 2026
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond. ☆24,603 · Updated Aug 12, 2024
- Codebase for Aria - an Open Multimodal Native MoE ☆1,086 · Updated Jan 22, 2025
- LLaVA-UHD v3: Progressive Visual Compression for Efficient Native-Resolution Encoding in MLLMs ☆417 · Updated Dec 20, 2025
- [CVPR2025 Highlight] Insight-V: Exploring Long-Chain Visual Reasoning with Multimodal Large Language Models ☆237 · Updated Nov 7, 2025
- Official implementation of BLIP3o-Series ☆1,653 · Updated Nov 29, 2025
- Ola: Pushing the Frontiers of Omni-Modal Language Model ☆388 · Updated Jun 13, 2025
- Extend OpenRLHF to support LMM RL training for reproduction of DeepSeek-R1 on multimodal tasks. ☆846 · Updated May 14, 2025
- [NeurIPS 2024] Dense Connector for MLLMs ☆182 · Updated Oct 14, 2024
- mPLUG-DocOwl: Modularized Multimodal Large Language Model for Document Understanding ☆2,375 · Updated May 30, 2025
- The official repo of Qwen-VL (通义千问-VL) chat & pretrained large vision language model proposed by Alibaba Cloud. ☆6,583 · Updated Aug 7, 2024
- DeepSeek-VL: Towards Real-World Vision-Language Understanding ☆4,085 · Updated Apr 24, 2024
- This repository provides the code and model checkpoints for AIMv1 and AIMv2 research projects. ☆1,410 · Updated Aug 4, 2025
- mPLUG-Owl: The Powerful Multi-modal Large Language Model Family ☆2,539 · Updated Apr 2, 2025
- EVE Series: Encoder-Free Vision-Language Models from BAAI ☆368 · Updated Jul 24, 2025