liziniu / ReMax
Code for the paper "ReMax: A Simple, Efficient and Effective Reinforcement Learning Method for Aligning Large Language Models"
☆199 · Updated 2 years ago
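As far as I understand the paper, ReMax is a REINFORCE-style policy gradient that drops PPO's learned value/critic network and instead uses the reward of the greedy-decoded response as the baseline. The snippet below is a minimal, self-contained sketch of that kind of update on a toy policy and reward; the names (`policy`, `reward_fn`, `rollout`) and all hyperparameters are illustrative and are not taken from this repository.

```python
# Sketch of a ReMax-style update: REINFORCE with a greedy-decoding baseline.
# Toy policy and reward model; not the liziniu/ReMax implementation.
import torch
import torch.nn.functional as F

vocab, seq_len = 16, 8
policy = torch.nn.Linear(vocab, vocab)          # toy "language model": prev token -> next-token logits
optim = torch.optim.Adam(policy.parameters(), lr=1e-3)

def reward_fn(tokens: torch.Tensor) -> torch.Tensor:
    # stand-in reward model: prefers sequences with large token ids
    return tokens.float().mean()

def rollout(greedy: bool):
    tok = torch.zeros(vocab)                    # "prompt" placeholder
    tokens, logps = [], []
    for _ in range(seq_len):
        logits = policy(tok)
        if greedy:
            nxt = logits.argmax()               # greedy decoding (used for the baseline)
        else:
            nxt = torch.distributions.Categorical(logits=logits).sample()
        logps.append(F.log_softmax(logits, dim=-1)[nxt])
        tokens.append(nxt)
        tok = F.one_hot(nxt, vocab).float()
    return torch.stack(tokens), torch.stack(logps)

for step in range(100):
    sampled_tokens, sampled_logps = rollout(greedy=False)
    with torch.no_grad():
        greedy_tokens, _ = rollout(greedy=True)
        baseline = reward_fn(greedy_tokens)     # ReMax baseline: reward of the greedy response
    advantage = reward_fn(sampled_tokens) - baseline
    loss = -advantage * sampled_logps.sum()     # REINFORCE with baseline, no value network
    optim.zero_grad()
    loss.backward()
    optim.step()
```

In a real RLHF setup the toy pieces above would be an actual language model, a learned reward model, and prompts from a dataset; the point of the sketch is only the baseline choice, which removes the critic that PPO would otherwise need.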
Alternatives and similar repositories for ReMax
Users interested in ReMax are comparing it to the libraries listed below.
- Reference implementation for Token-level Direct Preference Optimization (TDPO) ☆151 · Updated 11 months ago
- ☆215 · Updated 11 months ago
- Code for the ACL 2024 paper "Adversarial Preference Optimization (APO)" ☆56 · Updated last year
- Research code for "ArCHer: Training Language Model Agents via Hierarchical Multi-Turn RL" ☆202 · Updated 9 months ago
- [NeurIPS 2024 Oral] Aligner: Efficient Alignment by Learning to Correct ☆191 · Updated last year
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆96 · Updated last year
- Source code for Self-Evaluation Guided MCTS for online DPO ☆329 · Updated 2 weeks ago
- Code and data for Scaling Relationship on Learning Mathematical Reasoning with Large Language Models ☆269 · Updated last year
- Implementation of the ICML 2024 paper "Training Large Language Models for Reasoning through Reverse Curriculum Reinforcement Learning" pr… ☆115 · Updated 2 years ago
- This is my attempt to create Self-Correcting-LLM based on the paper Training Language Models to Self-Correct via Reinforcement Learning by g… ☆38 · Updated 7 months ago
- A lightweight reproduction of DeepSeek-R1-Zero with in-depth analysis of self-reflection behavior ☆249 · Updated 9 months ago
- The Entropy Mechanism of Reinforcement Learning for Large Language Model Reasoning ☆419 · Updated 7 months ago
- An index of algorithms for reinforcement learning from human feedback (RLHF) ☆92 · Updated last year
- Code accompanying the paper "Noise Contrastive Alignment of Language Models with Explicit Rewards" (NeurIPS 2024) ☆58 · Updated last year
- Official code for the paper "Stop Summation: Min-Form Credit Assignment Is All Process Reward Model Needs for Reasoning" ☆154 · Updated 3 months ago
- Repo of the paper "Free Process Rewards without Process Labels" ☆168 · Updated 10 months ago
- ☆224 · Updated 10 months ago
- Curation of resources for LLM mathematical reasoning, most of which are screened by @tongyx361 to ensure high quality and accompanied wit… ☆150 · Updated last year
- Research code for the preprint "Optimizing Test-Time Compute via Meta Reinforcement Finetuning" ☆116 · Updated 6 months ago
- Implementation of the research paper "Enhancing LLM Reasoning via Critique Models with Test-Time and Training-Time Supervision" ☆55 · Updated last year
- ☆160 · Updated last year
- Official implementation of the ICLR 2025 paper: Rethinking Bradley-Terry Models in Preference-based Reward Modeling: Foundations, Theory, and… ☆70 · Updated 10 months ago
- GenRM-CoT: Data release for verification rationales ☆68 · Updated last year
- A simple toolkit for benchmarking LLMs on mathematical reasoning tasks 🧮✨ ☆273 · Updated last year
- This is an official implementation of the Reward rAnked Fine-Tuning Algorithm (RAFT), also known as iterative best-of-n fine-tuning or re… ☆39 · Updated last year
- ☆78 · Updated last year
- RLHF implementation details of OAI's 2019 codebase ☆197 · Updated 2 years ago
- A tiny reproduction of DeepSeek-R1-Zero on two A100s ☆84 · Updated last year
- ☆117 · Updated last year
- AI Alignment: A Comprehensive Survey ☆136 · Updated 2 years ago