mkantwala / DeepSeek-R1-TrainingSuite
Advanced implementation of DeepSeek-R1 featuring Group Relative Policy Optimization (GRPO) for mathematical reasoning AI. Integrates safe distillation, modular reward systems, and efficient LoRA fine-tuning. Open-source Apache 2.0 licensed framework for developing aligned AI systems.
☆13 · Updated last year
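The description mentions Group Relative Policy Optimization (GRPO), whose core idea is to score each sampled answer against the statistics of its own group of samples rather than a learned critic. Below is a minimal sketch of that group-relative advantage, assuming simple scalar rewards; it is illustrative only and not this repository's actual code or API.

```python
import torch

def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Group-relative advantages: normalize each reward against the mean and
    std of the completions sampled for the same prompt.

    rewards: (num_prompts, group_size) scalar rewards, one row per prompt,
             one column per completion sampled from that prompt.
    """
    mean = rewards.mean(dim=-1, keepdim=True)
    std = rewards.std(dim=-1, keepdim=True)
    return (rewards - mean) / (std + eps)

# Hypothetical example: 2 prompts, 4 sampled answers each, 0/1 correctness rewards.
rewards = torch.tensor([[1.0, 0.0, 0.0, 1.0],
                        [0.0, 0.0, 1.0, 0.0]])
print(grpo_advantages(rewards))
```

In full GRPO these advantages then weight a clipped, PPO-style policy-gradient term plus a KL penalty toward a reference model; the sketch covers only the advantage step.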
Alternatives and similar repositories for DeepSeek-R1-TrainingSuite
Users interested in DeepSeek-R1-TrainingSuite are comparing it to the libraries listed below
- Fast LLM training codebase with dynamic strategy selection [Deepspeed+Megatron+FlashAttention+CudaFusionKernel+Compiler] ☆40 · Updated 2 years ago
- RLHF experiments on a single A100 40G GPU. Supports PPO, GRPO, REINFORCE, RAFT, RLOO, ReMax, and DeepSeek R1-Zero reproduction. ☆79 · Updated 11 months ago
- A repository aimed at pruning DeepSeek V3, R1, and R1-Zero to a usable size ☆83 · Updated 5 months ago
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning. COLM 2024 accepted paper. ☆32 · Updated last year
- FuseAI Project ☆87 · Updated last year
- The newest version of llama3, with its source code explained line by line in Chinese ☆22 · Updated last year
- An open-source implementation of R1 ☆29 · Updated this week
- A music large model based on InternLM2-chat ☆23 · Updated last year
- Our 2nd-gen LMM ☆34 · Updated last year
- A highly capable, lightweight 2.4B LLM using only 1T of pre-training data, with all details released ☆223 · Updated 6 months ago
- Fast instruction tuning with Llama2 ☆11 · Updated last year
- Creating the DeepSeek V3 model from scratch ☆24 · Updated 10 months ago
- [ICML 2025] TokenSwift: Lossless Acceleration of Ultra Long Sequence Generation ☆121 · Updated 8 months ago
- The simplest reproduction of R1-style results on a small model, illustrating the most essential property shared by o1-like models and DeepSeek R1: "Think is all you need." Experiments support that, for strong reasoning ability, the content of the thinking process is the core of AGI/ASI. ☆45 · Updated last year
- XVERSE-MoE-A36B: A multilingual large language model developed by XVERSE Technology Inc. ☆38 · Updated last year
- GLM Series Edge Models ☆158 · Updated 8 months ago
- Pretrain, decay, SFT a CodeLLM from scratch 🧙‍♂️ ☆40 · Updated last year
- Agentic Learning Powered by AWorld ☆88 · Updated this week
- A light proxy solution for HuggingFace hub ☆49 · Updated 2 years ago
- ☆74 · Updated 8 months ago
- ☆19 · Updated last year
- ☆96 · Updated last year
- ☆87 · Updated 5 months ago
- ☆118 · Updated 8 months ago
- Copy the MLP of llama3 8 times as 8 experts, create a router with random initialization, and add a load-balancing loss to construct an 8x8b Mo… (see the sketch after this list) ☆27 · Updated last year
- Fused Qwen3 MoE layer for faster training, compatible with Transformers, LoRA, bnb 4-bit quant, and Unsloth. Also possible to train LoRA over… ☆231 · Updated last week
- ☆61 · Updated 2 years ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆139 · Updated last year
- Exploration of Adept's multimodal Fuyu-8B model 🤓 🔍 ☆27 · Updated 2 years ago
- [AAAI 2026] The Avengers: A Simple Recipe for Uniting Smaller Language Models to Challenge Proprietary Giants ☆46 · Updated 2 months ago
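One entry above describes copying a dense llama3 MLP into eight experts with a randomly initialized router and a load-balancing loss. A minimal PyTorch sketch of that dense-to-MoE upcycling idea follows, assuming a generic `nn.Module` MLP and a Switch-Transformer-style auxiliary loss; class and parameter names here are illustrative, not that repository's code.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class UpcycledMoE(nn.Module):
    """Clone one pretrained MLP into several experts, route tokens with a
    randomly initialized linear gate, and return a Switch-style auxiliary
    load-balancing loss alongside the layer output."""

    def __init__(self, mlp: nn.Module, hidden_size: int,
                 num_experts: int = 8, top_k: int = 2):
        super().__init__()
        # "Upcycling": every expert starts as an identical copy of the dense MLP.
        self.experts = nn.ModuleList([copy.deepcopy(mlp) for _ in range(num_experts)])
        self.router = nn.Linear(hidden_size, num_experts, bias=False)  # random init
        self.num_experts = num_experts
        self.top_k = top_k

    def forward(self, x: torch.Tensor):
        # x: (num_tokens, hidden_size)
        probs = self.router(x).softmax(dim=-1)          # (tokens, experts)
        weights, idx = probs.topk(self.top_k, dim=-1)   # top-k experts per token
        weights = weights / weights.sum(dim=-1, keepdim=True)

        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            for slot in range(self.top_k):
                mask = idx[:, slot] == e                 # tokens whose slot routes to e
                if mask.any():
                    out[mask] = out[mask] + weights[mask, slot, None] * expert(x[mask])

        # Load-balancing loss: fraction of tokens whose top-1 choice is each
        # expert, times the mean router probability for that expert.
        frac = F.one_hot(idx[:, 0], self.num_experts).float().mean(dim=0)
        aux_loss = self.num_experts * (frac * probs.mean(dim=0)).sum()
        return out, aux_loss

# Tiny usage example with a stand-in MLP (all sizes are hypothetical).
dense_mlp = nn.Sequential(nn.Linear(16, 64), nn.SiLU(), nn.Linear(64, 16))
moe = UpcycledMoE(dense_mlp, hidden_size=16, num_experts=8, top_k=2)
y, aux = moe(torch.randn(4, 16))
print(y.shape, aux.item())
```

The auxiliary term multiplies each expert's routed-token fraction by its mean router probability, so minimizing it pushes the randomly initialized router toward a uniform load across the copied experts.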