FreedomIntelligence / TinyDeepSeek
A reproduction of the complete DeepSeek-R1 training process on small-scale models, covering pre-training, SFT, and RL.
☆28 · Updated 8 months ago
Alternatives and similar repositories for TinyDeepSeek
Users interested in TinyDeepSeek are comparing it to the libraries listed below.
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆87 · Updated 9 months ago
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models ☆143 · Updated 4 months ago
- [TMLR 2025] Efficient Reasoning Models: A Survey ☆275 · Updated 2 weeks ago
- ☆120 · Updated 5 months ago
- [ICML'25] Our study systematically investigates massive values in LLMs' attention mechanisms. First, we observe massive values are concen… ☆80 · Updated 4 months ago
- xVerify: Efficient Answer Verifier for Reasoning Model Evaluations ☆138 · Updated 6 months ago
- Official Repository of "Learning to Reason under Off-Policy Guidance" ☆364 · Updated last month
- Official repository for paper: O1-Pruner: Length-Harmonizing Fine-Tuning for O1-Like Reasoning Pruning ☆97 · Updated 8 months ago
- Chain of Thought (CoT) is so hot! So long! We need a short reasoning process! ☆69 · Updated 7 months ago
- ☆106 · Updated last month
- Efficient Mixture of Experts for LLM Paper List ☆143 · Updated last month
- One-shot Entropy Minimization ☆187 · Updated 5 months ago
- D^2-MoE: Delta Decompression for MoE-based LLMs Compression ☆69 · Updated 7 months ago
- [EMNLP 2025] TokenSkip: Controllable Chain-of-Thought Compression in LLMs ☆188 · Updated 4 months ago
- Extrapolating RLVR to General Domains without Verifiers ☆178 · Updated 3 months ago
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆258 · Updated 6 months ago
- Inference Code for Paper "Harder Tasks Need More Experts: Dynamic Routing in MoE Models" ☆66 · Updated last year
- ☆205 · Updated 2 weeks ago
- [ICML'25] Official code of paper "Fast Large Language Model Collaborative Decoding via Speculation" ☆28 · Updated 4 months ago
- "what, how, where, and how well? a survey on test-time scaling in large language models" repository☆76Updated this week
- ☆181 · Updated 5 months ago
- ☆65 · Updated 11 months ago
- Repository for the paper https://arxiv.org/abs/2504.13837 ☆217 · Updated 4 months ago
- State-of-the-art Parameter-Efficient MoE Fine-tuning Method ☆195 · Updated last year
- ☆212 · Updated 8 months ago
- Segment Policy Optimization: Effective Segment-Level Credit Assignment in RL for Large Language Models ☆41 · Updated last month
- 🚀 LLaMA-MoE v2: Exploring Sparsity of LLaMA from Perspective of Mixture-of-Experts with Post-Training ☆88 · Updated 11 months ago
- ZO2 (Zeroth-Order Offloading): Full Parameter Fine-Tuning 175B LLMs with 18GB GPU Memory [COLM2025] ☆194 · Updated 3 months ago
- ☆309 · Updated 5 months ago
- 📖 A repository for organizing papers, code, and other resources related to Latent Reasoning ☆269 · Updated last week