deepseek-ai / ESFT
Expert Specialized Fine-Tuning
☆721 · Updated 7 months ago
Alternatives and similar repositories for ESFT
Users interested in ESFT are comparing it to the libraries listed below.
- DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models ☆1,863 · Updated last year
- DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models ☆3,118 · Updated last year
- ☆548 · Updated last year
- OLMoE: Open Mixture-of-Experts Language Models ☆940 · Updated 3 months ago
- A curated list of open-source projects related to DeepSeek Coder ☆743 · Updated last month
- Muon is Scalable for LLM Training ☆1,397 · Updated 5 months ago
- [ICML 2025 Oral] CodeI/O: Condensing Reasoning Patterns via Code Input-Output Prediction ☆566 · Updated 8 months ago
- A project to improve skills of large language models ☆734 · Updated this week
- An Open Large Reasoning Model for Real-World Solutions ☆1,536 · Updated 7 months ago
- ☆1,377 · Updated 3 months ago
- DeepSeek-VL: Towards Real-World Vision-Language Understanding ☆4,043 · Updated last year
- AllenAI's post-training codebase ☆3,488 · Updated last week
- Scalable toolkit for efficient model reinforcement ☆1,193 · Updated last week
- ☆817 · Updated 6 months ago
- Large Reasoning Models ☆806 · Updated last year
- ☆1,344 · Updated last year
- [COLM 2025] LIMO: Less is More for Reasoning ☆1,059 · Updated 5 months ago
- Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code" ☆753 · Updated 5 months ago
- Fully open data curation for reasoning models ☆2,182 · Updated last month
- Recipes to scale inference-time compute of open models ☆1,123 · Updated 7 months ago
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … ☆814 · Updated 9 months ago
- OpenSeek aims to unite the global open source community to drive collaborative innovation in algorithms, data and systems to develop next… ☆240 · Updated last week
- Seed-Coder is a family of lightweight open-source code LLMs comprising base, instruct and reasoning models, developed by ByteDance Seed. ☆722 · Updated 7 months ago
- Evaluation suite for LLMs ☆376 · Updated 5 months ago
- Arena-Hard-Auto: An automatic LLM benchmark. ☆978 · Updated 6 months ago
- Parallel Scaling Law for Language Model — Beyond Parameter and Inference Time Scaling ☆465 · Updated 7 months ago
- A series of math-specific large language models of our Qwen2 series. ☆1,058 · Updated 11 months ago
- ZeroSearch: Incentivize the Search Capability of LLMs without Searching ☆1,216 · Updated 4 months ago
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆751 · Updated last year
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up Long-context LLMs' inference, approximate and dynamic sparse calculate the attention… ☆1,171 · Updated 3 months ago