lmarena / PPE
☆38 · Updated 4 months ago
Alternatives and similar repositories for PPE:
Users interested in PPE are comparing it to the repositories listed below.
- Language models scale reliably with over-training and on downstream tasks ☆96 · Updated 11 months ago
- Code for ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆44 · Updated last month
- ☆46 · Updated last year
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆52 · Updated 11 months ago
- ☆95 · Updated 8 months ago
- Skill-It! A Data-Driven Skills Framework for Understanding and Training Language Models ☆43 · Updated last year
- ☆48 · Updated 11 months ago
- Learning adapter weights from task descriptions ☆16 · Updated last year
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆130 · Updated 6 months ago
- AI Logging for Interpretability and Explainability 🔬 ☆108 · Updated 9 months ago
- ☆47 · Updated 7 months ago
- Directional Preference Alignment ☆56 · Updated 6 months ago
- A dataset of LLM-generated chain-of-thought steps annotated with mistake locations ☆79 · Updated 7 months ago
- A Kernel-Based View of Language Model Fine-Tuning (https://arxiv.org/abs/2210.05643) ☆74 · Updated last year
- Official implementation of the Reward rAnked Fine-Tuning algorithm (RAFT), also known as iterative best-of-n fine-tuning or re… ☆26 · Updated 6 months ago
- Self-Alignment with Principle-Following Reward Models ☆156 · Updated last year
- Codebase for Instruction Following without Instruction Tuning ☆33 · Updated 5 months ago
- Code and data for "Long-context LLMs Struggle with Long In-context Learning" [TMLR 2025] ☆102 · Updated last month
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆46 · Updated last year
- Reproduction of "RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment" ☆66 · Updated last year
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆119 · Updated 6 months ago
- ☆93 · Updated last year
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆121 · Updated 8 months ago
- [ICML 2024] Official repository for EXO: Towards Efficient Exact Optimization of Language Model Alignment ☆52 · Updated 9 months ago
- Critique-out-Loud Reward Models ☆55 · Updated 5 months ago
- Long Context Extension and Generalization in LLMs ☆50 · Updated 6 months ago
- Official repository for MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models [NeurIPS 2024] ☆61 · Updated 4 months ago
- CodeUltraFeedback: aligning large language models to coding preferences ☆70 · Updated 8 months ago