FreedomIntelligence / TinyDeepSeek
Reproduction of the complete process of DeepSeek-R1 on small-scale models, including Pre-training, SFT, and RL.
☆22 · Updated last month
Alternatives and similar repositories for TinyDeepSeek:
Users interested in TinyDeepSeek are comparing it to the repositories listed below.
- Trinity-RFT is a general-purpose, flexible, and scalable framework designed for reinforcement fine-tuning (RFT) of large language models (… ☆50 · Updated this week
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆65 · Updated 2 months ago
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning. COLM 2024 accepted paper. ☆32 · Updated 11 months ago
- ☆22 · Updated last month
- qwen-nsa ☆57 · Updated 2 weeks ago
- Pretrain, decay, and SFT a CodeLLM from scratch 🧙‍♂️ ☆35 · Updated 11 months ago
- Official codebase for "GenPRM: Scaling Test-Time Compute of Process Reward Models via Generative Reasoning". ☆70 · Updated this week
- An unofficial implementation of "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆35 · Updated 10 months ago
- ☆63 · Updated 5 months ago
- ☆76 · Updated 2 weeks ago
- An implementation of the paper "Improve Mathematical Reasoning in Language Models by Automated Process Supervision" from Google De… ☆28 · Updated 3 weeks ago
- xVerify: Efficient Answer Verifier for Reasoning Model Evaluations ☆75 · Updated last week
- [NeurIPS'24] Official code for *🎯DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving* ☆104 · Updated 4 months ago
- TokenSkip: Controllable Chain-of-Thought Compression in LLMs ☆133 · Updated last month
- ☆55 · Updated 6 months ago
- ZO2 (Zeroth-Order Offloading): Full Parameter Fine-Tuning 175B LLMs with 18GB GPU Memory ☆91 · Updated 3 weeks ago
- Agentic RAG R1 Framework via Reinforcement Learning ☆30 · Updated last week
- [NeurIPS 2024] Fast Best-of-N Decoding via Speculative Rejection ☆42 · Updated 6 months ago
- ☆60 · Updated this week
- Inference code for the paper "Harder Tasks Need More Experts: Dynamic Routing in MoE Models" ☆46 · Updated 8 months ago
- Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning ☆173 · Updated last month
- A paper list on efficient Mixture-of-Experts for LLMs ☆61 · Updated 4 months ago
- Rethinking RL Scaling for Vision Language Models: A Transparent, From-Scratch Framework and Comprehensive Evaluation Scheme ☆115 · Updated 2 weeks ago
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models ☆87 · Updated 2 months ago
- A repository showcasing the use of MCTS with LLMs to solve GSM8K problems ☆74 · Updated last month
- ☆16 · Updated 2 weeks ago
- We introduce ScaleQuest, a scalable, novel, and cost-effective data-synthesis method to unleash the reasoning capability of LLMs. ☆61 · Updated 6 months ago
- ☆74 · Updated last week
- AIMO2 2nd-place solution ☆49 · Updated last week
- PLM: Efficient Peripheral Language Models Hardware-Co-Designed for Ubiquitous Computing ☆16 · Updated last month