deepseek-ai / DeepSeek-MoE
DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models
☆1,651 · Updated last year
Alternatives and similar repositories for DeepSeek-MoE:
Users who are interested in DeepSeek-MoE are comparing it to the repositories listed below.
- Expert Specialized Fine-Tuning · ☆600 · Updated 6 months ago
- Scalable RL solution for advanced reasoning of language models · ☆1,478 · Updated last month
- EasyR1: An Efficient, Scalable, Multi-Modality RL Training Framework based on veRL · ☆1,968 · Updated this week
- DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models · ☆2,627 · Updated last year
- Official Repo for Open-Reasoner-Zero · ☆1,850 · Updated last week
- DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model · ☆4,870 · Updated 6 months ago
- Muon is Scalable for LLM Training · ☆1,020 · Updated 3 weeks ago
- Large Reasoning Models · ☆802 · Updated 4 months ago
- An Open Large Reasoning Model for Real-World Solutions · ☆1,484 · Updated last month
- Reproduce R1 Zero on Logic Puzzle · ☆2,287 · Updated 3 weeks ago
- The official repo of MiniMax-Text-01 and MiniMax-VL-01, a large language model and a vision-language model based on linear attention · ☆2,514 · Updated last week
- Simple RL training for reasoning · ☆3,435 · Updated last week
- MoBA: Mixture of Block Attention for Long-Context LLMs · ☆1,733 · Updated 2 weeks ago
- Distributed RL System for LLM Reasoning · ☆1,079 · Updated last week
- verl: Volcano Engine Reinforcement Learning for LLMs · ☆6,699 · Updated this week
- A family of open-source Mixture-of-Experts (MoE) Large Language Models · ☆1,508 · Updated last year
- An Open-source RL System from ByteDance Seed and Tsinghua AIR · ☆1,099 · Updated last week
- Fully open data curation for reasoning models · ☆1,697 · Updated last week
- DeepSeek-VL: Towards Real-World Vision-Language Understanding · ☆3,777 · Updated 11 months ago
- Official Implementation of EAGLE-1 (ICML'24), EAGLE-2 (EMNLP'24), and EAGLE-3 · ☆1,169 · Updated this week
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference · ☆2,092 · Updated last week
- Democratizing Reinforcement Learning for LLMs · ☆2,976 · Updated last week
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) · ☆955 · Updated 4 months ago
- OpenR: An Open Source Framework for Advanced Reasoning with Large Language Models · ☆1,756 · Updated 3 months ago
- AllenAI's post-training codebase · ☆2,898 · Updated this week
- OLMoE: Open Mixture-of-Experts Language Models · ☆713 · Updated last month
- O1 Replication Journey · ☆1,983 · Updated 3 months ago
- [NeurIPS'24 Spotlight, ICLR'25] To speed up long-context LLMs' inference, approximates the attention with dynamic sparse calculation, which r… · ☆969 · Updated last week
- Analyze computation-communication overlap in V3/R1 · ☆991 · Updated 3 weeks ago