waylandzhang / DeepSeek-RL-Qwen-0.5B-GRPO-gsm8k
☆85 · Updated 10 months ago
Alternatives and similar repositories for DeepSeek-RL-Qwen-0.5B-GRPO-gsm8k
Users interested in DeepSeek-RL-Qwen-0.5B-GRPO-gsm8k are comparing it to the libraries listed below
- Notes on reproducing various LLM techniques from scratch ☆241 · Updated 8 months ago
- Full-parameter, LoRA, and QLoRA fine-tuning of Llama 3 ☆212 · Updated last year
- LLM & RL ☆265 · Updated 2 months ago
- Train an LLM from scratch on a single 24 GB GPU ☆56 · Updated 5 months ago
- Dataset synthesis, model training, and evaluation for LLM mathematical problem solving, with accompanying write-ups ☆98 · Updated last year
- DPO training for Tongyi Qianwen (Qwen) ☆61 · Updated last year
- Custom reward development on top of verl ☆138 · Updated 7 months ago
- A repository for experimenting with and reproducing the pre-training process of LLMs ☆483 · Updated 7 months ago
- ☆119 · Updated last year
- ☆76 · Updated 2 years ago
- LLM applications: RAG, NL2SQL, chatbots, pre-training, MoE (mixture-of-experts) models, fine-tuning, reinforcement learning, and Tianchi data competitions ☆74 · Updated 10 months ago
- ☆268 · Updated last year
- Welcome to the "LLM-travel" repository! Explore the inner workings of large language models (LLMs) 🚀, aiming to deeply understand, discuss, and implement LLM-related techniques, principles, and applications ☆358 · Updated last year
- Chinese LLM fine-tuning (LLM-SFT) with the MWP-Instruct math instruction dataset; supported models (ChatGLM-6B, LLaMA, Bloom-7B, baichuan-7B); supports (LoRA, QLoRA, DeepSpeed, UI, TensorboardX); supports (fine-… ☆214 · Updated last year
- ☆115 · Updated last year
- Explanations, extensions, and reproductions of the DeepSeek series of work ☆697 · Updated 9 months ago
- Train a 1B-parameter LLM on 1T tokens from scratch as a personal project ☆778 · Updated 8 months ago
- ☆756 · Updated last week
- Alibaba Tongyi Qianwen (Qwen-7B-Chat/Qwen-7B): fine-tuning / LoRA / inference ☆132 · Updated last year
- ☆170 · Updated last year
- Walk through the ChatGPT technical pipeline from scratch ☆269 · Updated last year
- Minimal-cost training of a 0.5B R1-Zero model ☆796 · Updated 7 months ago
- A quick-start guide to RAG and private deployment ☆211 · Updated last year
- Personal ChatGPT ☆402 · Updated last year
- ☆81 · Updated last month
- ☆107 · Updated 6 months ago
- ☆32 · Updated last year
- A toolkit for knowledge distillation of large language models ☆229 · Updated this week
- Qwen1.5-SFT (Alibaba): Qwen_Qwen1.5-2B-Chat / Qwen_Qwen1.5-7B-Chat fine-tuning (transformers) / LoRA (peft) / inference ☆69 · Updated last year
- A hands-on guide to large language models: practical applications and real-world deployment ☆83 · Updated last year