DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models
☆1,909 · Jan 16, 2024 · Updated 2 years ago
Alternatives and similar repositories for DeepSeek-MoE
Users interested in DeepSeek-MoE are comparing it to the repositories listed below.
- DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model ☆5,006 · Sep 25, 2024 · Updated last year
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,675 · Mar 8, 2024 · Updated 2 years ago
- DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models ☆3,235 · Apr 15, 2024 · Updated 2 years ago
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) ☆1,000 · Dec 6, 2024 · Updated last year
- DeepSeek-VL: Towards Real-World Vision-Language Understanding ☆4,087 · Apr 24, 2024 · Updated last year
- A curated list of open-source projects related to DeepSeek Coder ☆773 · Nov 11, 2025 · Updated 5 months ago
- ☆560 · Aug 16, 2024 · Updated last year
- DeepSeek LLM: Let there be answers ☆6,809 · Feb 4, 2024 · Updated 2 years ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆2,244 · Aug 14, 2025 · Updated 8 months ago
- Fast and memory-efficient exact attention ☆23,344 · Updated this week
- Ongoing research training transformer models at scale ☆16,073 · Updated this week
- DeepSeek-VL2: Mixture-of-Experts Vision-Language Models for Advanced Multimodal Understanding ☆5,262 · Feb 26, 2025 · Updated last year
- Implementation of the paper Data Engineering for Scaling Language Models to 128K Context ☆497 · Mar 19, 2024 · Updated 2 years ago
- Expert Specialized Fine-Tuning ☆733 · May 22, 2025 · Updated 10 months ago
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,722 · Jun 25, 2024 · Updated last year
- An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO & REINFORCE++ & VLM & TIS & vLLM & Ray & Asy… ☆9,340 · Updated this week
- Train transformer language models with reinforcement learning. ☆18,054 · Updated this week
- Modeling, training, eval, and inference code for OLMo ☆6,477 · Nov 24, 2025 · Updated 4 months ago
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,329 · Mar 6, 2025 · Updated last year
- Tools for merging pretrained large language models. ☆6,973 · Mar 15, 2026 · Updated last month
- AllenAI's post-training codebase ☆3,683 · Updated this week
- verl: Volcano Engine Reinforcement Learning for LLMs ☆20,603 · Apr 10, 2026 · Updated last week
- Minimalistic large language model 3D-parallelism training ☆2,654 · Apr 7, 2026 · Updated last week
- DeepEP: an efficient expert-parallel communication library ☆9,131 · Updated this week
- FlashMLA: Efficient Multi-head Latent Attention Kernels ☆12,558 · Apr 7, 2026 · Updated last week
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling ☆6,376 · Updated this week
- OpenCompass is an LLM evaluation platform, supporting a wide range of models (Llama3, Mistral, InternLM2, GPT-4, LLaMa2, Qwen, GLM, Claude, … ☆6,866 · Updated this week
- Code and documents of LongLoRA and LongAlpaca (ICLR 2024 Oral) ☆2,694 · Aug 14, 2024 · Updated last year
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks ☆7,211 · Jul 11, 2024 · Updated last year
- Expert Parallelism Load Balancer ☆1,358 · Mar 24, 2025 · Updated last year
- AgentTuning: Enabling Generalized Agent Abilities for LLMs ☆1,483 · Oct 31, 2023 · Updated 2 years ago
- OLMoE: Open Mixture-of-Experts Language Models ☆1,007 · Sep 23, 2025 · Updated 6 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆76,536 · Updated this week
- A bidirectional pipeline parallelism algorithm for computation-communication overlap in DeepSeek V3/R1 training. ☆2,943 · Jan 14, 2026 · Updated 3 months ago
- SGLang is a high-performance serving framework for large language models and multimodal models. ☆26,025 · Updated this week
- Official inference library for Mistral models ☆10,763 · Feb 26, 2026 · Updated last month
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆20,929 · Apr 10, 2026 · Updated last week
- A fast MoE implementation for PyTorch ☆1,849 · Feb 10, 2025 · Updated last year
- Production-tested AI infrastructure tools for efficient AGI development and community-driven innovation ☆7,976 · May 15, 2025 · Updated 11 months ago