Linear95 / SPAG
Self-playing Adversarial Language Game Enhances LLM Reasoning, NeurIPS 2024
☆133 · Updated 4 months ago
Alternatives and similar repositories for SPAG
Users interested in SPAG are comparing it to the repositories listed below.
- Self-Alignment with Principle-Following Reward Models ☆161 · Updated last month
- ☆97 · Updated 11 months ago
- ☆114 · Updated 5 months ago
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆138 · Updated 9 months ago
- Research code for "ArCHer: Training Language Model Agents via Hierarchical Multi-Turn RL" ☆179 · Updated 2 months ago
- Code for the ACL 2024 paper "Adversarial Preference Optimization" (APO) ☆54 · Updated last year
- Code for the paper "VinePPO: Unlocking RL Potential For LLM Reasoning Through Refined Credit Assignment" ☆161 · Updated 3 weeks ago
- Critique-out-Loud Reward Models ☆66 · Updated 8 months ago
- RL Scaling and Test-Time Scaling (ICML 2025) ☆106 · Updated 5 months ago
- Implementation of the Quiet-STaR paper (https://arxiv.org/pdf/2403.09629.pdf) ☆54 · Updated 10 months ago
- Reformatted Alignment ☆113 · Updated 9 months ago
- (ICML 2024) AlphaZero-like tree search can guide large language model decoding and training ☆276 · Updated last year
- ☆121 · Updated last year
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" ☆159 · Updated 2 weeks ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆120 · Updated 9 months ago
- ☆190 · Updated 2 months ago
- The official implementation of Self-Exploring Language Models (SELM) ☆64 · Updated last year
- Personalized Soups: Personalized Large Language Model Alignment via Post-hoc Parameter Merging ☆107 · Updated last year
- Repo of the paper "Free Process Rewards without Process Labels" ☆153 · Updated 3 months ago
- Trial and Error: Exploration-Based Trajectory Optimization of LLM Agents (ACL 2024 Main Conference) ☆144 · Updated 7 months ago
- Code and example data for the paper "Rule Based Rewards for Language Model Safety" ☆188 · Updated 11 months ago
- Source code for Self-Evaluation Guided MCTS for online DPO ☆318 · Updated 10 months ago
- Official repository for EXO: Towards Efficient Exact Optimization of Language Model Alignment (ICML 2024) ☆57 · Updated last year
- Official repository for the ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆73 · Updated last month
- Benchmarking LLMs with Challenging Tasks from Real Users ☆226 · Updated 7 months ago
- "Improving Mathematical Reasoning with Process Supervision" by OpenAI ☆109 · Updated this week
- ☆180 · Updated 2 months ago
- A Collection of Competitive Text-Based Games for Language Model Evaluation and Reinforcement Learning ☆184 · Updated this week
- ☆203 · Updated 4 months ago
- ☆142 · Updated 7 months ago