deepseek-ai / ESFT
Expert Specialized Fine-Tuning
☆721 · Updated 7 months ago
Alternatives and similar repositories for ESFT
Users interested in ESFT are comparing it to the libraries listed below.
- DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models ☆1,863 · Updated last year
- DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models ☆3,118 · Updated last year
- ☆548 · Updated last year
- An Open Large Reasoning Model for Real-World Solutions ☆1,536 · Updated 7 months ago
- OLMoE: Open Mixture-of-Experts Language Models ☆940 · Updated 3 months ago
- ☆1,377 · Updated 3 months ago
- Muon is Scalable for LLM Training ☆1,397 · Updated 5 months ago
- ☆817 · Updated 6 months ago
- PyTorch building blocks for the OLMo ecosystem ☆656 · Updated this week
- Large Reasoning Models ☆806 · Updated last year
- [NeurIPS'25] Official codebase for "SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution" ☆650 · Updated 9 months ago
- A curated list of open-source projects related to DeepSeek Coder ☆743 · Updated last month
- [ICML 2025 Oral] CodeI/O: Condensing Reasoning Patterns via Code Input-Output Prediction ☆566 · Updated 8 months ago
- Scalable RL solution for advanced reasoning of language models ☆1,790 · Updated 9 months ago
- A project to improve the skills of large language models ☆734 · Updated this week
- ☆1,344 · Updated last year
- MoBA: Mixture of Block Attention for Long-Context LLMs ☆2,030 · Updated 9 months ago
- AllenAI's post-training codebase ☆3,488 · Updated this week
- Arena-Hard-Auto: An automatic LLM benchmark. ☆978 · Updated 6 months ago
- Scalable toolkit for efficient model reinforcement ☆1,193 · Updated last week
- ☆969 · Updated 11 months ago
- Training Large Language Model to Reason in a Continuous Latent Space ☆1,430 · Updated 4 months ago
- [COLM 2025] LIMO: Less is More for Reasoning ☆1,059 · Updated 5 months ago
- Fully open data curation for reasoning models ☆2,182 · Updated last month
- DeepSeek-VL: Towards Real-World Vision-Language Understanding ☆4,037 · Updated last year
- LongRoPE is a method that extends the context window of pre-trained LLMs to 2048k tokens ☆276 · Updated 2 months ago
- An Open Source Toolkit For LLM Distillation ☆817 · Updated 2 weeks ago
- Pretraining and inference code for a large-scale depth-recurrent language model ☆857 · Updated last week
- Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code" ☆753 · Updated 5 months ago
- Recipes to scale inference-time compute of open models ☆1,123 · Updated 7 months ago