lmarena / PPE
☆62 · Updated 8 months ago
Alternatives and similar repositories for PPE
Users that are interested in PPE are comparing it to the libraries listed below
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆124 · Updated last year
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆147 · Updated last year
- Self-Alignment with Principle-Following Reward Models ☆169 · Updated 4 months ago
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆136 · Updated last year
- [NeurIPS'24 Spotlight] Observational Scaling Laws ☆58 · Updated last year
- ☆72 · Updated last year
- Official repository for ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" ☆183 · Updated 8 months ago
- Implementation of ICML 2023 paper "Specializing Smaller Language Models towards Multi-Step Reasoning" ☆132 · Updated 2 years ago
- ☆103 · Updated 2 years ago
- Official repo for ICLR 2024 paper "MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback" by Xingyao Wang*, Ziha… ☆132 · Updated last year
- A Large-Scale, High-Quality Math Dataset for Reinforcement Learning in Language Models ☆71 · Updated 11 months ago
- ☆103 · Updated 2 years ago
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems ☆64 · Updated last year
- Critique-out-Loud Reward Models ☆73 · Updated last year
- [NeurIPS'24] Official code for 🎯DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving ☆120 · Updated last year
- Code and data used in the paper "Training on Incorrect Synthetic Data via RL Scales LLM Math Reasoning Eight-Fold" ☆32 · Updated last year
- A curated list of awesome resources dedicated to Scaling Laws for LLMs ☆82 · Updated 2 years ago
- Directional Preference Alignment ☆58 · Updated last year
- [ICML 2024] Selecting High-Quality Data for Training Language Models ☆201 · Updated last month
- ☆85 · Updated last year
- ☆109 · Updated 6 months ago
- Code and data for "Long-context LLMs Struggle with Long In-context Learning" [TMLR 2025] ☆110 · Updated 11 months ago
- Personalized Soups: Personalized Large Language Model Alignment via Post-hoc Parameter Merging ☆116 · Updated 2 years ago
- ☆41 · Updated 2 years ago
- Official repository for ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆75 · Updated 8 months ago
- GenRM-CoT: Data release for verification rationales ☆67 · Updated last year
- ☆55 · Updated last year
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆77 · Updated 3 months ago
- PASTA: Post-hoc Attention Steering for LLMs ☆134 · Updated last year
- Explore what LLMs are really learning during SFT ☆28 · Updated last year