general-preference / general-preference-model
[ICML 2025] Beyond Bradley-Terry Models: A General Preference Model for Language Model Alignment (https://arxiv.org/abs/2410.02197)
☆38 · Updated 4 months ago
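For orientation: the paper argues that Bradley-Terry reward models, which score each response with a single scalar and set P(y1 ≻ y2 | x) = σ(r(y1) − r(y2)), can only express transitive preferences, and instead scores pairs of response embeddings with a skew-symmetric preference operator. The sketch below is purely illustrative, not the paper's implementation: the embedding dimension, the block-diagonal operator construction, and the function names are all assumptions made for the demo.

```python
import numpy as np

def bradley_terry_prob(r1: float, r2: float) -> float:
    """Bradley-Terry: P(y1 beats y2) = sigmoid(r1 - r2).

    Each response gets one scalar reward, so the induced preferences
    are always transitive."""
    return 1.0 / (1.0 + np.exp(-(r1 - r2)))

def general_preference_score(v1: np.ndarray, v2: np.ndarray) -> float:
    """Toy general preference score s(y1 > y2) = v1^T R v2 with a
    skew-symmetric R (R^T = -R), so s(y1, y2) = -s(y2, y1).

    Unlike a scalar reward, such a score can represent cyclic
    (intransitive) preferences. This mirrors the skew-symmetric
    operator idea from the paper's abstract; the construction below
    is an illustrative assumption, not the authors' code."""
    d = v1.shape[0]
    assert d % 2 == 0, "pair up the embedding dimensions"
    R = np.zeros((d, d))
    for i in range(0, d, 2):          # block-diagonal 2x2 rotations
        R[i, i + 1], R[i + 1, i] = 1.0, -1.0
    return float(v1 @ R @ v2)

# Three "responses" embedded 120 degrees apart on the unit circle:
angles = {"rock": 0.0, "scissors": 2 * np.pi / 3, "paper": 4 * np.pi / 3}
emb = {k: np.array([np.cos(a), np.sin(a)]) for k, a in angles.items()}

for a, b in [("rock", "scissors"), ("scissors", "paper"), ("paper", "rock")]:
    print(f"s({a} > {b}) = {general_preference_score(emb[a], emb[b]):+.3f}")
# All three scores come out positive: a preference cycle that no
# scalar reward r(.) under Bradley-Terry can reproduce.
```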
Alternatives and similar repositories for general-preference-model
Users interested in general-preference-model are comparing it to the repositories listed below
- ☆23 · Updated last year
- ☆15 · Updated last year
- RAG-RewardBench: Benchmarking Reward Models in Retrieval Augmented Generation for Preference Alignment ☆16 · Updated last year
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆76 · Updated 3 months ago
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated last year
- Benchmarking Benchmark Leakage in Large Language Models ☆58 · Updated last year
- Evaluate the Quality of Critique ☆36 · Updated last year
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆39 · Updated last year
- AbstainQA, ACL 2024 ☆28 · Updated last year
- The source code for running LLMs on the AAAR-1.0 benchmark ☆17 · Updated 9 months ago
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆52 · Updated 7 months ago
- ☆30 · Updated last year
- [ICLR'25] Data and code for our paper "Why Does the Effective Context Length of LLMs Fall Short?" ☆78 · Updated last year
- Official repository for the ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆76 · Updated 7 months ago
- Code and models for the EMNLP 2024 paper "WPO: Enhancing RLHF with Weighted Preference Optimization" ☆41 · Updated last year
- Codebase for Instruction Following without Instruction Tuning ☆36 · Updated last year
- ☆18 · Updated last year
- The code and data for the paper JiuZhang3.0 ☆49 · Updated last year
- A trainable user simulator ☆34 · Updated 6 months ago
- ☆53 · Updated 10 months ago
- The Good, The Bad, and The Greedy: Evaluation of LLMs Should Not Ignore Non-Determinism ☆30 · Updated last year
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆46 · Updated 8 months ago
- ☆17 · Updated 5 months ago
- Directional Preference Alignment ☆58 · Updated last year
- [ACL 2025] Are Your LLMs Capable of Stable Reasoning? ☆32 · Updated 5 months ago
- ☆58 · Updated last year
- ☆12 · Updated last year
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning ☆120 · Updated 8 months ago
- [ACL 2025] We introduce ScaleQuest, a scalable, novel and cost-effective data synthesis method to unleash the reasoning capability of LLM… ☆68 · Updated last year
- Code for "[COLM'25] RepoST: Scalable Repository-Level Coding Environment Construction with Sandbox Testing" ☆22 · Updated 9 months ago