QwenLM / Qwen3-VL
Qwen3-VL is the multimodal large language model series developed by the Qwen team at Alibaba Cloud.
☆15,819 · Updated last week
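For context, a minimal usage sketch, assuming Qwen3-VL checkpoints load through the generic image-text-to-text interface in Hugging Face transformers; the checkpoint id and message fields below are illustrative assumptions, not taken from this listing.

```python
# Minimal sketch (not the official quickstart): loading a Qwen3-VL checkpoint
# via transformers' generic image-text-to-text interface. The model id below
# is an assumption; check the Qwen3-VL repo for released checkpoint names.
from transformers import AutoModelForImageTextToText, AutoProcessor

model_id = "Qwen/Qwen3-VL-8B-Instruct"  # hypothetical checkpoint id
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(model_id, device_map="auto")

# One user turn mixing an image and a text question.
messages = [{
    "role": "user",
    "content": [
        {"type": "image", "url": "https://example.com/demo.jpg"},
        {"type": "text", "text": "Describe this image in one sentence."},
    ],
}]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
answer = processor.batch_decode(
    output_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0]
print(answer)
```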
Alternatives and similar repositories for Qwen3-VL
Users interested in Qwen3-VL are comparing it to the repositories listed below.
- [CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4o | an open-source multimodal chat model approaching GPT-4o performance ☆9,420 · Updated last month
- Solve Visual Understanding with Reinforced VLMs ☆5,671 · Updated 2 weeks ago
- Qwen2.5-Omni is an end-to-end multimodal model by the Qwen team at Alibaba Cloud, capable of understanding text, audio, vision, and video, and performing real-time speech generation ☆3,760 · Updated 4 months ago
- Use PEFT or full-parameter training for CPT/SFT/DPO/GRPO on 500+ LLMs (Qwen3, Qwen3-MoE, Llama4, GLM4.5, InternLM3, DeepSeek-R1, ...) and 200+ MLLMs (…) ☆10,876 · Updated this week
- The official repo of Qwen-VL (通义千问-VL), the chat & pretrained large vision-language model proposed by Alibaba Cloud ☆6,335 · Updated last year
- Qwen3 is the large language model series developed by the Qwen team at Alibaba Cloud ☆25,257 · Updated 3 weeks ago
- ☆4,364 · Updated last month
- verl: Volcano Engine Reinforcement Learning for LLMs ☆15,194 · Updated this week
- Agent framework and applications built upon Qwen>=3.0, featuring Function Calling, MCP, Code Interpreter, RAG, Chrome extension, etc. ☆12,220 · Updated last month
- DeepSeek-VL2: Mixture-of-Experts Vision-Language Models for Advanced Multimodal Understanding ☆5,109 · Updated 8 months ago
- Witness the aha moment of VLM with less than $3 ☆3,972 · Updated 5 months ago
- VILA is a family of state-of-the-art vision language models (VLMs) for diverse multimodal AI tasks across the edge, data center, and cloud ☆3,627 · Updated 2 weeks ago
- DeepSeek-VL: Towards Real-World Vision-Language Understanding ☆4,002 · Updated last year
- The official repo of MiniMax-Text-01 and MiniMax-VL-01, a large language model and vision-language model based on linear attention ☆3,217 · Updated 4 months ago
- The official repo of Qwen (通义千问), the chat & pretrained large language model proposed by Alibaba Cloud ☆19,635 · Updated last month
- Open-source evaluation toolkit for large multi-modality models (LMMs), supporting 220+ LMMs and 80+ benchmarks ☆3,294 · Updated this week
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond ☆23,870 · Updated last year
- GPT4V-level open-source multi-modal model based on Llama3-8B ☆2,420 · Updated 8 months ago
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs ☆7,223 · Updated last week
- ☆3,466 · Updated 8 months ago
- EasyR1: An Efficient, Scalable, Multi-Modality RL Training Framework based on veRL ☆3,963 · Updated this week
- SGLang is a fast serving framework for large language models and vision language models ☆19,718 · Updated this week
- Janus-Series: Unified Multimodal Understanding and Generation Models ☆17,604 · Updated 9 months ago
- GLM-4 series: Open Multilingual Multimodal Chat LMs ☆6,912 · Updated 4 months ago
- A Next-Generation Training Engine Built for Ultra-Large MoE Models ☆4,963 · Updated this week
- Open-source unified multimodal model ☆5,256 · Updated last week
- Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek-R1, Qwen3, Gemma 3, TTS 2x faster with 70% less VRAM ☆47,705 · Updated last week
- A state-of-the-art open visual language model | multimodal pretrained model ☆6,687 · Updated last year
- Fast and memory-efficient exact attention ☆20,280 · Updated last week
- Democratizing Reinforcement Learning for LLMs ☆4,662 · Updated this week