tianjunz / TEMPERA
☆44 · Updated 2 years ago
Alternatives and similar repositories for TEMPERA
Users interested in TEMPERA are comparing it to the repositories listed below.
- This is the official repo for "Towards Uncertainty-Aware Language Agent". ☆24 · Updated 9 months ago
- ☆40 · Updated last year
- Directional Preference Alignment ☆57 · Updated 7 months ago
- Source code for the TMLR paper "Black-Box Prompt Learning for Pre-trained Language Models" ☆55 · Updated last year
- Lightweight Adapting for Black-Box Large Language Models ☆22 · Updated last year
- ☆25 · Updated 11 months ago
- ☆49 · Updated last year
- [ICLR 2025] Unintentional Unalignment: Likelihood Displacement in Direct Preference Optimization ☆25 · Updated 3 months ago
- ☆12 · Updated 4 months ago
- Reproduction of "RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment" ☆69 · Updated last year
- Official implementation of "Bootstrapping Language Models via DPO Implicit Rewards" ☆44 · Updated last month
- [NeurIPS 2024] Official code of $\beta$-DPO: Direct Preference Optimization with Dynamic $\beta$ ☆43 · Updated 6 months ago
- Online Adaptation of Language Models with a Memory of Amortized Contexts (NeurIPS 2024) ☆63 · Updated 9 months ago
- Self-Supervised Alignment with Mutual Information ☆18 · Updated 11 months ago
- EMNLP 2024: "Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue" ☆35 · Updated 6 months ago
- Long Is More for Alignment: A Simple but Tough-to-Beat Baseline for Instruction Fine-Tuning [ICML 2024] ☆17 · Updated last year
- Code for the paper "Toward Optimal LLM Alignments Using Two-Player Games". ☆16 · Updated 10 months ago
- ☆22 · Updated 2 months ago
- Active Example Selection for In-Context Learning (EMNLP'22) ☆49 · Updated 9 months ago
- Let's Sample Step by Step: Adaptive-Consistency for Efficient Reasoning with LLMs ☆36 · Updated last year
- What Makes a Reward Model a Good Teacher? An Optimization Perspective ☆28 · Updated last month
- ☆36 · Updated 7 months ago
- Official implementation of the ICLR 2025 paper "Rethinking Bradley-Terry Models in Preference-based Reward Modeling: Foundations, Theory, and…" ☆58 · Updated last month
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆120 · Updated 8 months ago
- This is the official implementation of "ScaleBiO: Scalable Bilevel Optimization for LLM Data Reweighting". ☆18 · Updated 9 months ago
- ☆25 · Updated last year
- Code for the paper "A Sober Look at Progress in Language Model Reasoning" ☆45 · Updated this week
- Official implementation of Rewarded Soups ☆57 · Updated last year
- Domain-specific preference (DSP) data and customized RM fine-tuning. ☆25 · Updated last year
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆38 · Updated last year