deepseek-ai / DeepSeek-MoE
DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models
☆1,599 · Updated last year
Alternatives and similar repositories for DeepSeek-MoE:
Users interested in DeepSeek-MoE are comparing it to the libraries listed below:
- Expert Specialized Fine-Tuning ☆594 · Updated 6 months ago
- Muon is Scalable for LLM Training ☆974 · Updated 3 weeks ago
- Official Repo for Open-Reasoner-Zero ☆1,667 · Updated 2 weeks ago
- MoBA: Mixture of Block Attention for Long-Context LLMs ☆1,687 · Updated 2 weeks ago
- EasyR1: An Efficient, Scalable, Multi-Modality RL Training Framework based on veRL ☆1,681 · Updated this week
- Large Reasoning Models ☆799 · Updated 3 months ago
- DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model ☆4,844 · Updated 6 months ago
- ☆910 · Updated 2 months ago
- Democratizing Reinforcement Learning for LLMs ☆2,113 · Updated last month
- O1 Replication Journey ☆1,977 · Updated 2 months ago
- An Open Large Reasoning Model for Real-World Solutions ☆1,475 · Updated 3 weeks ago
- DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models ☆2,566 · Updated 11 months ago
- RAGEN leverages reinforcement learning to train LLM reasoning agents in interactive, stochastic environments. ☆1,210 · Updated this week
- An Open-source RL System from ByteDance Seed and Tsinghua AIR ☆767 · Updated this week
- OLMoE: Open Mixture-of-Experts Language Models ☆690 · Updated last week
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆851 · Updated last month
- A replication of DeepSeek-R1-Zero and DeepSeek-R1 training on small models with limited data ☆3,223 · Updated this week
- A bidirectional pipeline parallelism algorithm for computation-communication overlap in V3/R1 training. ☆2,656 · Updated 2 weeks ago
- OpenR: An Open Source Framework for Advanced Reasoning with Large Language Models ☆1,732 · Updated 2 months ago
- DeepSeek-VL: Towards Real-World Vision-Language Understanding ☆3,712 · Updated 11 months ago
- Analyze computation-communication overlap in V3/R1. ☆957 · Updated this week
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,481 · Updated last year
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) ☆936 · Updated 3 months ago
- Scalable RL solution for advanced reasoning of language models ☆1,419 · Updated last week
- Expert Parallelism Load Balancer ☆1,099 · Updated this week
- ☆481 · Updated 7 months ago
- A curated list of open-source projects related to DeepSeek Coder ☆652 · Updated 11 months ago
- verl: Volcano Engine Reinforcement Learning for LLMs ☆5,399 · Updated this week
- Reproduce R1 Zero on Logic Puzzle ☆2,208 · Updated this week
- [NeurIPS'24 Spotlight, ICLR'25] To speed up Long-context LLMs' inference, approximate and dynamic sparse calculate the attention, which r… ☆944 · Updated last month