general-preference / general-preference-model
Official implementation of the ICML 2025 paper "Beyond Bradley-Terry Models: A General Preference Model for Language Model Alignment" (https://arxiv.org/abs/2410.02197)
☆28 · Updated last week
Alternatives and similar repositories for general-preference-model
Users interested in general-preference-model are comparing it to the libraries listed below.
- Codebase for Instruction Following without Instruction Tuning ☆35 · Updated 11 months ago
- ☆22 · Updated last year
- ☆14 · Updated last year
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆51 · Updated 3 months ago
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆44 · Updated 5 months ago
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆38 · Updated last year
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated last year
- ☆16 · Updated last year
- RENT (Reinforcement Learning via Entropy Minimization) is an unsupervised method for training reasoning LLMs. ☆39 · Updated 2 months ago
- The official repo of "WebExplorer: Explore and Evolve for Training Long-Horizon Web Agents" ☆64 · Updated last week
- RAG-RewardBench: Benchmarking Reward Models in Retrieval Augmented Generation for Preference Alignment ☆16 · Updated 9 months ago
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆48 · Updated last year
- ☆30 · Updated 8 months ago
- Code for "[COLM'25] RepoST: Scalable Repository-Level Coding Environment Construction with Sandbox Testing" ☆21 · Updated 6 months ago
- [ACL-25] We introduce ScaleQuest, a scalable, novel, and cost-effective data synthesis method to unleash the reasoning capability of LLMs. ☆67 · Updated 10 months ago
- [ACL 2024] Self-Training with Direct Preference Optimization Improves Chain-of-Thought Reasoning ☆49 · Updated last year
- Official code implementation for the ACL 2025 paper "Dynamic Scaling of Unit Tests for Code Reward Modeling" ☆25 · Updated 4 months ago
- The source code for running LLMs on the AAAR-1.0 benchmark ☆17 · Updated 5 months ago
- ☆45 · Updated last week
- Official repository for the ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆75 · Updated 4 months ago
- ☆21 · Updated last year
- [ACL 2025] Are Your LLMs Capable of Stable Reasoning? ☆30 · Updated last month
- ☆18 · Updated last year
- ☆53 · Updated 7 months ago
- The Good, The Bad, and The Greedy: Evaluation of LLMs Should Not Ignore Non-Determinism ☆30 · Updated last year
- [NAACL 2024] A Synthetic, Scalable and Systematic Evaluation Suite for Large Language Models ☆33 · Updated last year
- Suri: Multi-constraint instruction following for long-form text generation (EMNLP'24) ☆25 · Updated 10 months ago
- The code and data for the paper JiuZhang3.0 ☆49 · Updated last year
- Directional Preference Alignment ☆59 · Updated 11 months ago
- ☆28 · Updated 8 months ago