vwxyzjn / summarize_from_feedback_details
☆137 · Updated 5 months ago
Alternatives and similar repositories for summarize_from_feedback_details:
Users interested in summarize_from_feedback_details are comparing it to the libraries listed below.
- RLHF implementation details of OAI's 2019 codebase ☆186 · Updated last year
- Code for the paper "VinePPO: Unlocking RL Potential For LLM Reasoning Through Refined Credit Assignment" ☆151 · Updated 5 months ago
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆132 · Updated 7 months ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆120 · Updated 7 months ago
- Reference implementation for Token-level Direct Preference Optimization (TDPO) ☆136 · Updated 2 months ago
- Code for the ACL 2024 paper "Adversarial Preference Optimization (APO)" ☆53 · Updated 10 months ago
- Research code for "ArCHer: Training Language Model Agents via Hierarchical Multi-Turn RL" ☆162 · Updated last week
- ☆96 · Updated 9 months ago
- ☆107 · Updated 3 months ago
- Code for the paper "ReMax: A Simple, Efficient and Effective Reinforcement Learning Method for Aligning Large Language Models" ☆181 · Updated last year
- Repo of the paper "Free Process Rewards without Process Labels" ☆143 · Updated last month
- A (somewhat) minimal library for finetuning language models with PPO on human feedback ☆85 · Updated 2 years ago
- Directional Preference Alignment ☆57 · Updated 7 months ago
- ☆187 · Updated 2 months ago
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" ☆175 · Updated last month
- ☆149 · Updated 4 months ago
- A brief and partial summary of RLHF algorithms ☆127 · Updated last month
- Code and data for "Scaling Relationship on Learning Mathematical Reasoning with Large Language Models" ☆256 · Updated 7 months ago
- Async pipelined version of Verl ☆60 · Updated 2 weeks ago
- Curation of resources for LLM mathematical reasoning, most of which are screened by @tongyx361 to ensure high quality and accompanied wit… ☆122 · Updated 9 months ago
- Official repository for the ICML 2024 paper "EXO: Towards Efficient Exact Optimization of Language Model Alignment" ☆55 · Updated 10 months ago
- ☆98 · Updated 6 months ago
- Self-Alignment with Principle-Following Reward Models ☆160 · Updated last year
- [NeurIPS'24 Spotlight] Observational Scaling Laws ☆54 · Updated 6 months ago
- ☆90 · Updated 9 months ago
- [NeurIPS'24] Official code for "🎯DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving" ☆101 · Updated 4 months ago
- Personalized Soups: Personalized Large Language Model Alignment via Post-hoc Parameter Merging ☆99 · Updated last year
- Source code for Self-Evaluation Guided MCTS for online DPO ☆303 · Updated 8 months ago
- Advancing Language Model Reasoning through Reinforcement Learning and Inference Scaling ☆101 · Updated 3 months ago
- Implementation of the ICML 2024 paper "Training Large Language Models for Reasoning through Reverse Curriculum Reinforcement Learning" pr… ☆98 · Updated last year