DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model
☆4,994 · Sep 25, 2024 · Updated last year
Alternatives and similar repositories for DeepSeek-V2
Users interested in DeepSeek-V2 are comparing it with the repositories listed below.
- DeepSeek-VL: Towards Real-World Vision-Language Understanding ☆4,076 · Apr 24, 2024 · Updated last year
- DeepSeek-Coder-V2: Breaking the Barrier of Closed-Source Models in Code Intelligence ☆6,511 · Nov 11, 2025 · Updated 3 months ago
- DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models ☆3,183 · Apr 15, 2024 · Updated last year
- DeepSeek LLM: Let there be answers ☆6,748 · Feb 4, 2024 · Updated 2 years ago
- DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models ☆1,895 · Jan 16, 2024 · Updated 2 years ago
- Qwen3 is the large language model series developed by the Qwen team, Alibaba Cloud. ☆26,852 · Jan 9, 2026 · Updated 2 months ago
- DeepSeek Coder: Let the Code Write Itself ☆22,873 · Nov 11, 2025 · Updated 3 months ago
- SGLang is a high-performance serving framework for large language models and multimodal models. ☆24,216 · Updated this week
- The official repo of Qwen (通义千问), the chat & pretrained large language model proposed by Alibaba Cloud ☆20,566 · Jan 30, 2026 · Updated last month
- Fast and memory-efficient exact attention ☆22,460 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆71,883 · Updated this week
- A Gemini 2.5 Flash-level MLLM for vision, speech, and full-duplex multimodal live streaming on your phone ☆24,027 · Feb 23, 2026 · Updated 2 weeks ago
- MiniCPM4 & MiniCPM4.1: Ultra-efficient LLMs on end devices, achieving a 3×+ generation speedup on reasoning tasks ☆8,695 · Feb 11, 2026 · Updated 3 weeks ago
- Expert Specialized Fine-Tuning ☆731 · May 22, 2025 · Updated 9 months ago
- Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024) ☆67,966 · Updated this week
- Agent framework and applications built upon Qwen>=3.0, featuring Function Calling, MCP, Code Interpreter, RAG, Chrome extension, etc. ☆15,126 · Updated this week
- A series of large language models trained from scratch by developers @01-ai ☆7,843 · Nov 27, 2024 · Updated last year
- Ongoing research training transformer models at scale ☆15,535 · Updated this week
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ☆41,759 · Updated this week
- Official release of the InternLM series (InternLM, InternLM2, InternLM2.5, InternLM3) ☆7,159 · Oct 30, 2025 · Updated 4 months ago
- The official Meta Llama 3 GitHub site ☆29,277 · Jan 26, 2025 · Updated last year
- [CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4o ☆9,854 · Sep 22, 2025 · Updated 5 months ago
- An open platform for training, serving, and evaluating large language models; release repo for Vicuna and Chatbot Arena ☆39,426 · Jun 2, 2025 · Updated 9 months ago
- The official repo of Qwen-VL (通义千问-VL), the chat & pretrained large vision-language model proposed by Alibaba Cloud ☆6,545 · Aug 7, 2024 · Updated last year
- Janus-Series: Unified Multimodal Understanding and Generation Models ☆17,710 · Feb 1, 2025 · Updated last year
- Open-Sora: Democratizing Efficient Video Production for All ☆28,658 · Apr 30, 2025 · Updated 10 months ago
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs. ☆7,645 · Updated this week
- DeepSeek-VL2: Mixture-of-Experts Vision-Language Models for Advanced Multimodal Understanding ☆5,240 · Feb 26, 2025 · Updated last year
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA), built towards GPT-4V-level capabilities and beyond ☆24,500 · Aug 12, 2024 · Updated last year
- verl: Volcano Engine Reinforcement Learning for LLMs ☆19,519 · Mar 2, 2026 · Updated last week
- TensorRT LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizations… ☆12,993 · Updated this week
- GLM-4 series: Open Multilingual Multimodal Chat LMs ☆7,066 · Jul 4, 2025 · Updated 8 months ago
- Qwen3-VL is the multimodal large language model series developed by the Qwen team, Alibaba Cloud. ☆18,505 · Jan 30, 2026 · Updated last month
- FlashMLA: Efficient Multi-head Latent Attention Kernels ☆12,511 · Feb 6, 2026 · Updated last month
- [ICLR 2024] Official implementation of DreamCraft3D: Hierarchical 3D Generation with Bootstrapped Diffusion Prior ☆3,003 · Apr 22, 2025 · Updated 10 months ago
- Modeling, training, eval, and inference code for OLMo ☆6,353 · Nov 24, 2025 · Updated 3 months ago
- Retrieval and Retrieval-augmented LLMs ☆11,352 · Dec 15, 2025 · Updated 2 months ago
- A Next-Generation Training Engine Built for Ultra-Large MoE Models ☆5,092 · Updated this week
- Making large AI models cheaper, faster, and more accessible ☆41,364 · Mar 2, 2026 · Updated last week