jackfsuia / nanoRLHF
RLHF experiments on a single A100 40G GPU. Supports PPO, GRPO, REINFORCE, RAFT, RLOO, ReMax, and DeepSeek R1-Zero reproduction.
☆67 Updated 5 months ago
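As a rough orientation to the methods named above, here is a minimal, illustrative sketch (not code from nanoRLHF) of the group-normalized advantage that GRPO-style training computes over completions sampled for the same prompt; the function name, epsilon value, and example rewards are assumptions for illustration only.

```python
import numpy as np

def grpo_advantages(rewards, eps=1e-6):
    # Group-normalized advantages: center each completion's reward on the group mean
    # and scale by the group standard deviation (eps avoids division by zero).
    r = np.asarray(rewards, dtype=np.float64)
    return (r - r.mean()) / (r.std() + eps)

# Example: scalar rewards for four completions sampled from one prompt.
print(grpo_advantages([1.0, 0.0, 0.0, 1.0]))  # ≈ [ 1. -1. -1.  1.]
```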
Alternatives and similar repositories for nanoRLHF
Users that are interested in nanoRLHF are comparing it to the libraries listed below
- ☆65 Updated 8 months ago
- A repo showcasing the use of MCTS with LLMs to solve GSM8K problems ☆85 Updated 4 months ago
- On Memorization of Large Language Models in Logical Reasoning ☆70 Updated 4 months ago
- This is a personal reimplementation of Google's Infini-transformer, utilizing a small 2b model. The project includes both model and train… ☆58 Updated last year
- ☆103 Updated 3 weeks ago
- Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement fine-tuning (RFT) of large language models (… ☆204 Updated this week
- A lightweight reproduction of DeepSeek-R1-Zero with in-depth analysis of self-reflection behavior. ☆245 Updated 3 months ago
- ☆33 Updated 10 months ago
- ☆20 Updated 3 months ago
- ☆95 Updated 7 months ago
- Async pipelined version of Verl ☆112 Updated 3 months ago
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ☆253 Updated 7 months ago
- A highly capable 2.4B lightweight LLM using only 1T pre-training data with all details. ☆200 Updated last week
- ☆70 Updated last week
- [ACL-25] We introduce ScaleQuest, a scalable, novel and cost-effective data synthesis method to unleash the reasoning capability of LLMs. ☆63 Updated 9 months ago
- Codes and Data for Scaling Relationship on Learning Mathematical Reasoning with Large Language Models ☆268 Updated 10 months ago
- ☆107 Updated last year
- Code for Paper (ReMax: A Simple, Efficient and Effective Reinforcement Learning Method for Aligning Large Language Models) ☆189 Updated last year
- Counting-Stars (★) ☆83 Updated 2 months ago
- ☆103 Updated 8 months ago
- ☆147 Updated 8 months ago
- OpenRFT: Adapting Reasoning Foundation Model for Domain-specific Tasks with Reinforcement Fine-Tuning ☆146 Updated 7 months ago
- Curation of resources for LLM mathematical reasoning, most of which are screened by @tongyx361 to ensure high quality and accompanied wit… ☆133 Updated last year
- ☆83 Updated last year
- ☆129 Updated last year
- A prototype repo for hybrid training of pipeline parallel and distributed data parallel with comments on core code snippets. Feel free to… ☆56 Updated 2 years ago
- Pretrain, decay, SFT a CodeLLM from scratch 🧙‍♂️ ☆36 Updated last year
- Fantastic Data Engineering for Large Language Models ☆89 Updated 7 months ago
- A visualization tool for deeper understanding and easier debugging of RLHF training. ☆240 Updated 5 months ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆136 Updated last year