tengxiao1 / SimPER
SimPER: A Minimalist Approach to Preference Alignment without Hyperparameters (ICLR 2025)
☆13 · Updated 2 months ago
Alternatives and similar repositories for SimPER
Users interested in SimPER are comparing it to the libraries listed below.
- [EMNLP Findings 2024 & ACL 2024 NLRSE Oral] Enhancing Mathematical Reasonin… ☆51 · Updated last year
- Directional Preference Alignment ☆57 · Updated 9 months ago
- ☆40 · Updated last year
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆44 · Updated 2 months ago
- AdaRFT: Efficient Reinforcement Finetuning via Adaptive Curriculum Learning ☆37 · Updated last week
- Self-Supervised Alignment with Mutual Information ☆19 · Updated last year
- Code and data used in the paper "Training on Incorrect Synthetic Data via RL Scales LLM Math Reasoning Eight-Fold" ☆30 · Updated last year
- Rewarded soups official implementation ☆58 · Updated last year
- Source code for "Preference-grounded Token-level Guidance for Language Model Fine-tuning" (NeurIPS 2023) ☆16 · Updated 5 months ago
- Official implementation of the Reward rAnked Fine-Tuning Algorithm (RAFT), also known as iterative best-of-n fine-tuning or re… ☆32 · Updated 9 months ago
- ☆28 · Updated last year
- Online Adaptation of Language Models with a Memory of Amortized Contexts (NeurIPS 2024) ☆63 · Updated 10 months ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆121 · Updated 9 months ago
- ☆67 · Updated last year
- ☆15 · Updated last year
- ☆51 · Updated 2 months ago
- Learning adapter weights from task descriptions ☆19 · Updated last year
- RENT (Reinforcement Learning via Entropy Minimization) is an unsupervised method for training reasoning LLMs. ☆28 · Updated 3 weeks ago
- ICML 2024 - Official repository for EXO: Towards Efficient Exact Optimization of Language Model Alignment ☆57 · Updated last year
- Advantage Leftover Lunch Reinforcement Learning (A-LoL RL): Improving Language Models with Advantage-based Offline Policy Gradients ☆26 · Updated 9 months ago
- Align your LM to express calibrated verbal statements of confidence in its long-form generations. ☆26 · Updated last year
- Code for the paper "Preserving Diversity in Supervised Fine-tuning of Large Language Models" ☆30 · Updated last month
- Critique-out-Loud Reward Models ☆66 · Updated 8 months ago
- ☆48 · Updated last month
- ☆59 · Updated 9 months ago
- ☆27 · Updated 2 years ago
- Domain-specific preference (DSP) data and customized RM fine-tuning ☆25 · Updated last year
- Code for the ACL 2024 paper "Adversarial Preference Optimization (APO)" ☆54 · Updated last year
- Code for most of the experiments in the paper "Understanding the Effects of RLHF on LLM Generalisation and Diversity" ☆43 · Updated last year
- A unified platform for implementing and evaluating test-time reasoning mechanisms in Large Language Models (LLMs) ☆19 · Updated 5 months ago