XU-YIJIE / grpo-flat
Train your GRPO model with zero dataset and low resources; 8-bit/4-bit quantization, LoRA/QLoRA, and multi-GPU training supported ...
☆79 · Updated 9 months ago
Alternatives and similar repositories for grpo-flat
Users interested in grpo-flat are comparing it to the repositories listed below
- A visualization tool for deeper understanding and easier debugging of RLHF training.☆283 · Updated 11 months ago
- Custom reward development on top of verl.☆144 · Updated 8 months ago
- Dataset synthesis, model training, and evaluation for LLM mathematical problem-solving, with write-ups documenting the work.☆99 · Updated last year
- How to train an LLM tokenizer.☆153 · Updated 2 years ago
- Training an LLM from scratch on a single 24 GB GPU.☆56 · Updated 6 months ago
- LLM & RL.☆271 · Updated 3 months ago
- ☆120 · Updated last year
- PyTorch distributed training.☆73 · Updated 2 years ago
- Notes on reproducing various LLM techniques from scratch.☆243 · Updated 9 months ago
- The related works and background techniques behind OpenAI o1.☆221 · Updated last year
- RLHF experiments on a single A100 40G GPU. Supports PPO, GRPO, REINFORCE, RAFT, RLOO, ReMax, and DeepSeek R1-Zero reproduction.☆79 · Updated 11 months ago
- ☆552 · Updated last year
- Finetuning LLaMA with RLHF (Reinforcement Learning from Human Feedback) based on DeepSpeed Chat.☆117 · Updated 2 years ago
- OpenRFT: Adapting Reasoning Foundation Model for Domain-specific Tasks with Reinforcement Fine-Tuning.☆155 · Updated last year
- ☆41 · Updated 10 months ago
- ☆48 · Updated 11 months ago
- A repo showcasing the use of MCTS with LLMs to solve GSM8K problems.☆94 · Updated 2 months ago
- ☆163 · Updated last year
- a-m-team's explorations in large language modeling.☆195 · Updated 8 months ago
- ☆136 · Updated last year
- ☆115 · Updated last year
- Code for a New Loss for Mitigating the Bias of Learning Difficulties in Generative Language Models.☆67 · Updated 11 months ago
- ☆85 · Updated last year
- ☆47 · Updated last year
- ☆36 · Updated last year
- Reinforcement learning in LLMs and NLP.☆62 · Updated last month
- Adds an RLHF implementation to ChatGLM-6B, with line-by-line explanations of some core code; the examples apply RLHF to short news-headline generation and context-conditioned recommendation.☆88 · Updated 2 years ago
- Fine-tuning large language models with the DPO algorithm; simple and easy to get started with.☆50 · Updated last year
- Full-stack LLM (pre-training/finetuning, PPO (RLHF), inference, quantization, etc.)☆30 · Updated 11 months ago
- DPO training for Tongyi Qianwen (Qwen).☆61 · Updated last year