princeton-pli / what-makes-good-rm
What Makes a Reward Model a Good Teacher? An Optimization Perspective
☆31 · Updated last month
Alternatives and similar repositories for what-makes-good-rm
Users who are interested in what-makes-good-rm are comparing it to the repositories listed below.
- Official implementation of the ICLR 2025 paper: Rethinking Bradley-Terry Models in Preference-based Reward Modeling: Foundations, Theory, and… ☆58 · Updated 2 months ago
- Preprint: Asymmetry in Low-Rank Adapters of Foundation Models ☆35 · Updated last year
- A Sober Look at Language Model Reasoning ☆52 · Updated last week
- Official implementation of Rewarded Soups ☆58 · Updated last year
- Code for "Reasoning to Learn from Latent Thoughts" ☆104 · Updated 2 months ago
- [ICLR 2025] Code & data for the paper "Super(ficial)-alignment: Strong Models May Deceive Weak Models in Weak-to-Strong Generalization" ☆13 · Updated 11 months ago
- ☆40 · Updated last year
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆85 · Updated 7 months ago
- Co-Supervised Learning: Improving Weak-to-Strong Generalization with Hierarchical Mixture of Experts ☆16 · Updated last year
- [ICLR 2025] Unintentional Unalignment: Likelihood Displacement in Direct Preference Optimization ☆28 · Updated 4 months ago
- ☆19 · Updated 3 weeks ago
- Optimizing Anytime Reasoning via Budget Relative Policy Optimization ☆36 · Updated last week
- Lightweight Adapting for Black-Box Large Language Models ☆22 · Updated last year
- [NeurIPS 2024] "Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?" ☆35 · Updated 4 months ago
- Official implementation of ScaleBiO: Scalable Bilevel Optimization for LLM Data Reweighting ☆19 · Updated 10 months ago
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆79 · Updated 9 months ago
- AdaRFT: Efficient Reinforcement Finetuning via Adaptive Curriculum Learning ☆35 · Updated 3 weeks ago
- Directional Preference Alignment ☆56 · Updated 8 months ago
- Official code for "Decoding-Time Language Model Alignment with Multiple Objectives" ☆23 · Updated 7 months ago
- Official code for the paper "Probing the Decision Boundaries of In-context Learning in Large Language Models" (https://arxiv.org/abs/2406.11233…) ☆18 · Updated 9 months ago
- Official code for SEAL: Steerable Reasoning Calibration of Large Language Models for Free ☆25 · Updated last month
- Code for the paper "Aligning Large Language Models with Representation Editing: A Control Perspective" ☆32 · Updated 4 months ago
- ☆14 · Updated last year
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆44 · Updated last month
- ☆15 · Updated 9 months ago
- PaCE: Parsimonious Concept Engineering for Large Language Models (NeurIPS 2024) ☆35 · Updated 7 months ago
- Official implementation of the Reward rAnked Fine-Tuning algorithm (RAFT), also known as iterative best-of-n fine-tuning or re… ☆31 · Updated 8 months ago
- ☆53 · Updated 7 months ago
- Code for the paper "Merging Multi-Task Models via Weight-Ensembling Mixture of Experts" ☆24 · Updated 11 months ago
- ☆29 · Updated last year