lmarena / PPE
☆62 · Updated 8 months ago
Alternatives and similar repositories for PPE
Users interested in PPE are comparing it to the repositories listed below.
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision (☆124, updated last year)
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following (☆136, updated last year)
- Official repository for the ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" (☆183, updated 8 months ago)
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] (☆147, updated last year)
- Self-Alignment with Principle-Following Reward Models (☆169, updated 4 months ago)
- Reproduction of "RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment" (☆69, updated 2 years ago)
- Critique-out-Loud Reward Models (☆73, updated last year)
- ☆72, updated last year
- Official repo for the ICLR 2024 paper "MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback" by Xingyao Wang*, Ziha… (☆132, updated last year)
- Official repository for the ACL 2025 paper "Model Extrapolation Expedites Alignment" (☆75, updated 8 months ago)
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems (☆64, updated last year)
- Implementation of the ICML 2023 paper "Specializing Smaller Language Models towards Multi-Step Reasoning" (☆132, updated 2 years ago)
- [NeurIPS'24 Spotlight] Observational Scaling Laws (☆58, updated last year)
- ☆102, updated 2 years ago
- ☆47, updated 10 months ago
- ☆107, updated last year
- ☆41, updated 2 years ago
- A curated list of awesome resources dedicated to Scaling Laws for LLMs (☆81, updated 2 years ago)
- GenRM-CoT: Data release for verification rationales (☆68, updated last year)
- Code and example data for the paper "Rule Based Rewards for Language Model Safety" (☆205, updated last year)
- ☆103, updated 2 years ago
- [NeurIPS'24] Official code for "🎯DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving" (☆120, updated last year)
- ☆80, updated 10 months ago
- Code for the ACL 2024 paper "Adversarial Preference Optimization (APO)" (☆56, updated last year)
- [ICLR'25] Data and code for our paper "Why Does the Effective Context Length of LLMs Fall Short?" (☆78, updated last year)
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy (☆77, updated 3 months ago)
- A Large-Scale, High-Quality Math Dataset for Reinforcement Learning in Language Models (☆71, updated 11 months ago)
- Personalized Soups: Personalized Large Language Model Alignment via Post-hoc Parameter Merging (☆116, updated 2 years ago)
- ☆109, updated 6 months ago
- ☆99, updated last year