FreedomIntelligence / TinyDeepSeek
A reproduction of the complete DeepSeek-R1 training process on small-scale models, including pre-training, SFT, and RL.
☆29 · Updated 9 months ago
Alternatives and similar repositories for TinyDeepSeek
Users interested in TinyDeepSeek are comparing it to the repositories listed below.
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning (☆88 · Updated 10 months ago)
- [EMNLP 2025] TokenSkip: Controllable Chain-of-Thought Compression in LLMs (☆197 · Updated 3 weeks ago)
- Repository for "What, How, Where, and How Well? A Survey on Test-Time Scaling in Large Language Models" (☆82 · Updated this week)
- xVerify: Efficient Answer Verifier for Reasoning Model Evaluations (☆143 · Updated last month)
- D^2-MoE: Delta Decompression for MoE-based LLMs Compression (☆72 · Updated 9 months ago)
- ☆65 · Updated last year
- Inference code for the paper "Harder Tasks Need More Experts: Dynamic Routing in MoE Models" (☆67 · Updated last year)
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models (☆150 · Updated 5 months ago)
- [AAAI 2026] Official codebase for "GenPRM: Scaling Test-Time Compute of Process Reward Models via Generative Reasoning" (☆92 · Updated last month)
- ☆321 · Updated 7 months ago
- Official repository for the paper "O1-Pruner: Length-Harmonizing Fine-Tuning for O1-Like Reasoning Pruning" (☆98 · Updated 10 months ago)
- ☆49 · Updated this week
- ☆126 · Updated 6 months ago
- A highly capable 2.4B lightweight LLM using only 1T tokens of pre-training data, with all details released (☆222 · Updated 4 months ago)
- ☆114 · Updated 3 months ago
- Extrapolating RLVR to General Domains without Verifiers (☆184 · Updated 4 months ago)
- [ICML'25] Official code for the paper "Fast Large Language Model Collaborative Decoding via Speculation" (☆28 · Updated 6 months ago)
- [NeurIPS 2024] A Novel Rank-Based Metric for Evaluating Large Language Models (☆56 · Updated 6 months ago)
- [ICML'25] Our study systematically investigates massive values in LLMs' attention mechanisms. First, we observe massive values are concen… (☆86 · Updated 6 months ago)
- [ACL 2024] Official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning" (☆138 · Updated last year)
- [ICLR 2025] SuperCorrect: Advancing Small LLM Reasoning with Thought Template Distillation and Self-Correction (☆86 · Updated 9 months ago)
- Official code for the paper "Stop Summation: Min-Form Credit Assignment Is All Process Reward Model Needs for Reasoning" (☆148 · Updated 2 months ago)
- ☆72 · Updated 8 months ago
- 🚀 LLaMA-MoE v2: Exploring Sparsity of LLaMA from Perspective of Mixture-of-Experts with Post-Training (☆91 · Updated last year)
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning (☆258 · Updated 7 months ago)
- ☆54 · Updated 5 months ago
- [TMLR 2025] Efficient Reasoning Models: A Survey (☆285 · Updated last month)
- Towards a Unified View of Large Language Model Post-Training (☆195 · Updated 3 months ago)
- Chain of Thought (CoT) is so hot! So long! We need a short reasoning process! (☆71 · Updated 8 months ago)
- One-shot Entropy Minimization (☆187 · Updated 6 months ago)