vwxyzjn / summarize_from_feedback_details
☆160 · Updated last year
Alternatives and similar repositories for summarize_from_feedback_details
Users interested in summarize_from_feedback_details are comparing it to the repositories listed below.
- RLHF implementation details of OAI's 2019 codebase ☆197 · Updated last year
- Code for the paper "ReMax: A Simple, Efficient and Effective Reinforcement Learning Method for Aligning Large Language Models" ☆199 · Updated 2 years ago
- Code for the paper "VinePPO: Unlocking RL Potential For LLM Reasoning Through Refined Credit Assignment" ☆183 · Updated 7 months ago
- "Improving Mathematical Reasoning with Process Supervision" by OpenAI ☆114 · Updated 2 months ago
- Implementation of the ICML 2024 paper "Training Large Language Models for Reasoning through Reverse Curriculum Reinforcement Learning" pr… ☆114 · Updated last year
- ☆100 · Updated last year
- ☆116 · Updated 11 months ago
- Official implementation of the paper "Building Math Agents with Multi-Turn Iterative Preference Learning" with multi-turn DP… ☆32 · Updated last year
- Code for the ACL 2024 paper "Adversarial Preference Optimization (APO)" ☆56 · Updated last year
- Research code for "ArCHer: Training Language Model Agents via Hierarchical Multi-Turn RL" ☆199 · Updated 8 months ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆125 · Updated last year
- Self-Alignment with Principle-Following Reward Models ☆169 · Updated 3 months ago
- ☆60 · Updated 7 months ago
- Directional Preference Alignment ☆58 · Updated last year
- Code and data used in the paper "Training on Incorrect Synthetic Data via RL Scales LLM Math Reasoning Eight-Fold" ☆31 · Updated last year
- [ICML 2024] Official repository for EXO: Towards Efficient Exact Optimization of Language Model Alignment ☆57 · Updated last year
- A (somewhat) minimal library for finetuning language models with PPO on human feedback ☆89 · Updated 3 years ago
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples ☆112 · Updated 5 months ago
- Learning from preferences is a common paradigm for fine-tuning language models. Yet, many algorithmic design decisions come into play. Ou… ☆32 · Updated last year
- Reference implementation for Token-level Direct Preference Optimization (TDPO) ☆149 · Updated 10 months ago
- Self-playing Adversarial Language Game Enhances LLM Reasoning, NeurIPS 2024 ☆142 · Updated 10 months ago
- Personalized Soups: Personalized Large Language Model Alignment via Post-hoc Parameter Merging ☆115 · Updated 2 years ago
- A Large-Scale, High-Quality Math Dataset for Reinforcement Learning in Language Models ☆70 · Updated 10 months ago
- Official implementation of the paper "Process Reward Model with Q-value Rankings" ☆65 · Updated 10 months ago
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆146 · Updated last year
- Code and example data for the paper "Rule Based Rewards for Language Model Safety" ☆203 · Updated last year
- ☆34 · Updated last year
- [NeurIPS'24 Spotlight] Observational Scaling Laws ☆59 · Updated last year
- RL Scaling and Test-Time Scaling (ICML'25) ☆112 · Updated 11 months ago
- GenRM-CoT: Data release for verification rationales ☆67 · Updated last year