vwxyzjn / summarize_from_feedback_details
☆152 · Updated 11 months ago
Alternatives and similar repositories for summarize_from_feedback_details
Users interested in summarize_from_feedback_details are comparing it to the libraries listed below.
- RLHF implementation details of OAI's 2019 codebase ☆193 · Updated last year
- ☆116 · Updated 9 months ago
- A (somewhat) minimal library for finetuning language models with PPO on human feedback. ☆87 · Updated 2 years ago
- Code for the paper "VinePPO: Unlocking RL Potential For LLM Reasoning Through Refined Credit Assignment" ☆178 · Updated 5 months ago
- ☆100 · Updated last year
- Official implementation of the paper "Building Math Agents with Multi-Turn Iterative Preference Learning" with multi-turn DP… ☆30 · Updated 10 months ago
- Code for the paper "ReMax: A Simple, Efficient and Effective Reinforcement Learning Method for Aligning Large Language Models" ☆194 · Updated last year
- Self-playing Adversarial Language Game Enhances LLM Reasoning, NeurIPS 2024 ☆140 · Updated 8 months ago
- Code and data used in the paper "Training on Incorrect Synthetic Data via RL Scales LLM Math Reasoning Eight-Fold" ☆31 · Updated last year
- Research code for "ArCHer: Training Language Model Agents via Hierarchical Multi-Turn RL" ☆196 · Updated 6 months ago
- Code for the ACL 2024 paper "Adversarial Preference Optimization" (APO) ☆57 · Updated last year
- "Improving Mathematical Reasoning with Process Supervision" by OPENAI☆111Updated last week
- Reference implementation for Token-level Direct Preference Optimization(TDPO)☆148Updated 8 months ago
- ICML 2024 - Official repository for EXO: Towards Efficient Exact Optimization of Language Model Alignment ☆57 · Updated last year
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆142 · Updated last year
- [NeurIPS'24 Spotlight] Observational Scaling Laws ☆57 · Updated last year
- Implementation of the ICML 2024 paper "Training Large Language Models for Reasoning through Reverse Curriculum Reinforcement Learning" pr… ☆112 · Updated last year
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆123 · Updated last year
- Official implementation of the paper "Process Reward Model with Q-value Rankings" ☆64 · Updated 8 months ago
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples ☆108 · Updated 3 months ago
- Self-Alignment with Principle-Following Reward Models ☆169 · Updated last month
- Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, finetuning, evaluating, and serving LLMs in JAX/Fl… ☆75 · Updated last year
- Directional Preference Alignment ☆57 · Updated last year
- ☆55 · Updated 5 months ago
- The code for creating the iGSM datasets in papers "Physics of Language Models Part 2.1, Grade-School Math and the Hidden Reasoning Proces… ☆78 · Updated 9 months ago
- Personalized Soups: Personalized Large Language Model Alignment via Post-hoc Parameter Merging ☆110 · Updated 2 years ago
- RL Scaling and Test-Time Scaling (ICML'25) ☆111 · Updated 9 months ago
- A brief and partial summary of RLHF algorithms. ☆132 · Updated 7 months ago
- Repo for the paper "Free Process Rewards without Process Labels" ☆164 · Updated 7 months ago
- A Large-Scale, High-Quality Math Dataset for Reinforcement Learning in Language Models ☆65 · Updated 8 months ago