QwenLM / Qwen3-VL
Qwen3-VL is the multimodal large language model series developed by the Qwen team at Alibaba Cloud.
☆17,662 · Updated last week
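For orientation, a minimal usage sketch follows. It assumes a recent `transformers` release with Qwen3-VL support and uses the hypothetical checkpoint name `Qwen/Qwen3-VL-8B-Instruct`; both are assumptions, so consult the repository's README for the exact checkpoint names and supported API.

```python
# Minimal sketch: loading a Qwen3-VL checkpoint with Hugging Face transformers.
# Assumptions: a transformers version with Qwen3-VL support, and that the
# checkpoint "Qwen/Qwen3-VL-8B-Instruct" exists on the Hub (hypothetical name).
from transformers import AutoModelForImageTextToText, AutoProcessor

model_id = "Qwen/Qwen3-VL-8B-Instruct"  # hypothetical checkpoint name
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(model_id, device_map="auto")

# Chat-style multimodal input: one image (by URL) plus a text question.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/cat.jpg"},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

# Generate and decode only the newly produced tokens.
output_ids = model.generate(**inputs, max_new_tokens=128)
new_tokens = output_ids[:, inputs["input_ids"].shape[1]:]
print(processor.batch_decode(new_tokens, skip_special_tokens=True)[0])
```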
Alternatives and similar repositories for Qwen3-VL
Users interested in Qwen3-VL are comparing it to the repositories listed below.
- Qwen3 is the large language model series developed by the Qwen team at Alibaba Cloud. ☆26,044 · Updated 3 months ago
- [CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4o. An open-source multimodal dialogue model approaching GPT-4o performance. ☆9,692 · Updated 3 months ago
- The official repo of Qwen-VL (通义千问-VL), the chat & pretrained large vision language model proposed by Alibaba Cloud. ☆6,464 · Updated last year
- Use PEFT or full-parameter training to CPT/SFT/DPO/GRPO 600+ LLMs (Qwen3, Qwen3-MoE, DeepSeek-R1, GLM4.5, InternLM3, Llama4, ...) and 300+ MLLMs (… ☆12,112 · Updated this week
- ☆4,496 · Updated 3 months ago
- SGLang is a high-performance serving framework for large language models and multimodal models. ☆22,190 · Updated last week
- Qwen2.5-Omni is an end-to-end multimodal model by the Qwen team at Alibaba Cloud, capable of understanding text, audio, vision, video, and pe… ☆3,868 · Updated 7 months ago
- Janus-Series: Unified Multimodal Understanding and Generation Models ☆17,668 · Updated 11 months ago
- DeepSeek-VL2: Mixture-of-Experts Vision-Language Models for Advanced Multimodal Understanding ☆5,176 · Updated 10 months ago
- The official repo of Qwen (通义千问), the chat & pretrained large language model proposed by Alibaba Cloud. ☆20,113 · Updated last month
- A state-of-the-art-level open visual language model | multimodal pretrained model ☆6,713 · Updated last year
- Solve Visual Understanding with Reinforced VLMs ☆5,797 · Updated 2 months ago
- VILA is a family of state-of-the-art vision language models (VLMs) for diverse multimodal AI tasks across the edge, data center, and clou… ☆3,718 · Updated last month
- Fully open reproduction of DeepSeek-R1 ☆25,805 · Updated last month
- GPT4V-level open-source multi-modal model based on Llama3-8B ☆2,428 · Updated 10 months ago
- Agent framework and applications built upon Qwen>=3.0, featuring Function Calling, MCP, Code Interpreter, RAG, Chrome extension, etc. ☆12,863 · Updated 3 months ago
- GLM-4 series: Open Multilingual Multimodal Chat LMs ☆7,017 · Updated 6 months ago
- DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model ☆4,985 · Updated last year
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs. ☆7,491 · Updated this week
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA), built towards GPT-4V-level capabilities and beyond. ☆24,274 · Updated last year
- Witness the aha moment of VLM with less than $3. ☆4,016 · Updated 7 months ago
- FlashMLA: Efficient Multi-head Latent Attention Kernels ☆11,964 · Updated 3 weeks ago
- Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024) ☆65,399 · Updated this week
- DeepSeek-VL: Towards Real-World Vision-Language Understanding ☆4,043 · Updated last year
- verl: Volcano Engine Reinforcement Learning for LLMs ☆18,123 · Updated this week
- s1: Simple test-time scaling ☆6,625 · Updated 6 months ago
- Open-source evaluation toolkit for large multi-modality models (LMMs), supporting 220+ LMMs and 80+ benchmarks ☆3,673 · Updated this week
- [ICCV 2025] LLaVA-CoT, a visual language model capable of spontaneous, systematic reasoning ☆2,110 · Updated last month
- A Next-Generation Training Engine Built for Ultra-Large MoE Models ☆5,047 · Updated this week
- Fast and memory-efficient exact attention ☆21,516 · Updated this week