jackfsuia / nanoRLHF
RLHF experiments on a single A100 40G GPU. Supports PPO, GRPO, REINFORCE, RAFT, RLOO, and ReMax, plus a DeepSeek R1-Zero reproduction.
☆68 · Updated 6 months ago
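Of the algorithms listed above, GRPO is the simplest to illustrate: instead of PPO's learned value baseline, it normalizes each sampled response's reward against the other responses in its group. A minimal sketch of that group-relative advantage computation (illustrative only; not nanoRLHF's actual code, and the function name is my own):

```python
# Illustrative sketch of GRPO's group-normalized advantage, not taken
# from nanoRLHF. For each prompt, several responses are sampled and
# scored; each response's advantage is its reward standardized within
# the group.
import statistics


def grpo_advantages(group_rewards, eps=1e-6):
    """Advantage of each response = (reward - group mean) / (group std + eps)."""
    mean = statistics.mean(group_rewards)
    std = statistics.pstdev(group_rewards)
    return [(r - mean) / (std + eps) for r in group_rewards]


adv = grpo_advantages([1.0, 0.0, 0.5, 0.5])
# Within a group the advantages sum to zero: higher-reward responses
# get positive advantage, lower-reward ones negative.
```

These advantages then weight the token-level policy-gradient loss, which is why GRPO needs no separate value network.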
Alternatives and similar repositories for nanoRLHF
Users interested in nanoRLHF are comparing it to the libraries listed below.
- A repo showcasing the use of MCTS with LLMs to solve GSM8K problems ☆86 · Updated 5 months ago
- ☆65 · Updated 9 months ago
- ☆129 · Updated last year
- On Memorization of Large Language Models in Logical Reasoning ☆71 · Updated 5 months ago
- ☆74 · Updated last week
- OpenRFT: Adapting Reasoning Foundation Model for Domain-specific Tasks with Reinforcement Fine-Tuning ☆148 · Updated 8 months ago
- ☆103 · Updated 8 months ago
- ☆96 · Updated 8 months ago
- A research repo for experiments on reinforcement fine-tuning ☆51 · Updated 4 months ago
- ☆33 · Updated 11 months ago
- [ACL-25] We introduce ScaleQuest, a scalable, novel, and cost-effective data synthesis method to unleash the reasoning capability of LLMs. ☆64 · Updated 10 months ago
- Fantastic Data Engineering for Large Language Models ☆90 · Updated 8 months ago
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ☆253 · Updated 8 months ago
- End-to-End Reinforcement Learning for Multi-Turn Tool-Integrated Reasoning ☆181 · Updated 3 weeks ago
- ☆109 · Updated last year
- ☆104 · Updated last month
- A highly capable 2.4B lightweight LLM using only 1T of pre-training data, with all details released ☆209 · Updated last month
- A tiny-scale reproduction of DeepSeek R1-Zero on two A100s ☆71 · Updated 6 months ago
- ☆83 · Updated last year
- xVerify: Efficient Answer Verifier for Reasoning Model Evaluations ☆128 · Updated 4 months ago
- Code and data for Scaling Relationship on Learning Mathematical Reasoning with Large Language Models ☆267 · Updated 11 months ago
- Counting-Stars (★) ☆83 · Updated 2 months ago
- Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement fine-tuning (RFT) of large language models (… ☆259 · Updated last week
- ☆206 · Updated 6 months ago
- A lightweight reproduction of DeepSeek-R1-Zero with in-depth analysis of self-reflection behavior ☆244 · Updated 4 months ago
- Code for the paper "ReMax: A Simple, Efficient and Effective Reinforcement Learning Method for Aligning Large Language Models" ☆191 · Updated last year
- Curation of resources for LLM mathematical reasoning, most of which are screened by @tongyx361 to ensure high quality and accompanied wit… ☆136 · Updated last year
- Fine-tuning LLaMA with RLHF (Reinforcement Learning from Human Feedback) based on DeepSpeed Chat ☆114 · Updated 2 years ago
- ☆20 · Updated 4 months ago
- Reference implementation for Token-level Direct Preference Optimization (TDPO) ☆146 · Updated 6 months ago