general-preference / general-preference-model
Official implementation of the paper "Beyond Bradley-Terry Models: A General Preference Model for Language Model Alignment" (https://arxiv.org/abs/2410.02197)
☆23 · Updated 2 months ago
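The paper's central contrast can be summarized in a few lines: a Bradley-Terry reward model assigns each response a single scalar reward, which forces preferences into a total order and therefore keeps them transitive, while a general preference model scores a pair of responses with a skew-symmetric form over learned preference embeddings, which can also represent cyclic (intransitive) preferences. The sketch below is illustrative only and not taken from this repository; the embedding dimension, the rotation-based skew-symmetric operator, and all names are assumptions.

```python
import numpy as np

def bt_preference(r1: float, r2: float) -> float:
    """Bradley-Terry: P(y1 > y2 | x) = sigmoid(r(y1) - r(y2)) from scalar rewards.
    Scalar rewards impose a total order, so only transitive preferences are expressible."""
    return 1.0 / (1.0 + np.exp(-(r1 - r2)))

def general_preference(v1: np.ndarray, v2: np.ndarray) -> float:
    """Hypothetical sketch of a general preference score: P(y1 > y2 | x) = sigmoid(v1^T R v2),
    where v1, v2 are preference embeddings of the two responses and R is a
    skew-symmetric operator (here: a 90-degree rotation of each 2-D embedding pair).
    Because v1^T R v2 = -(v2^T R v1), the two directions stay consistent, yet
    cyclic preferences such as A > B > C > A remain expressible."""
    assert v1.shape == v2.shape and v1.size % 2 == 0
    rotated = np.empty_like(v2)
    rotated[0::2] = -v2[1::2]   # R acts blockwise on each pair: (a, b) -> (-b, a)
    rotated[1::2] = v2[0::2]
    return 1.0 / (1.0 + np.exp(-float(v1 @ rotated)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a, b = rng.normal(size=4), rng.normal(size=4)
    print(bt_preference(1.2, 0.7))                                # ~0.62
    print(general_preference(a, b) + general_preference(b, a))   # the two directions sum to 1.0
```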
Alternatives and similar repositories for general-preference-model:
Users interested in general-preference-model are comparing it to the repositories listed below:
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆47 · Updated 3 months ago
- ☆29 · Updated 3 months ago
- Codebase for Instruction Following without Instruction Tuning ☆34 · Updated 6 months ago
- Official Repository of Are Your LLMs Capable of Stable Reasoning? ☆25 · Updated last month
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated last year
- ☆22 · Updated 9 months ago
- ☆16 · Updated 8 months ago
- Implementation of the model: "Reka Core, Flash, and Edge: A Series of Powerful Multimodal Language Models" in PyTorch ☆30 · Updated this week
- ☆59 · Updated 7 months ago
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆38 · Updated last year
- [ICLR 2025] LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization ☆31 · Updated last month
- ICML 2024 - Official Repository for EXO: Towards Efficient Exact Optimization of Language Model Alignment ☆54 · Updated 10 months ago
- The code and data for the paper JiuZhang3.0 ☆43 · Updated 10 months ago
- Official implementation of AAAI 2025 paper "Augmenting Math Word Problems via Iterative Question Composing" (https://arxiv.org/abs/2401.09… ☆20 · Updated 4 months ago
- Evaluate the Quality of Critique ☆34 · Updated 10 months ago
- Exploration of automated dataset selection approaches at large scales. ☆37 · Updated last month
- Improving Language Understanding from Screenshots. Paper: https://arxiv.org/abs/2402.14073 ☆28 · Updated 9 months ago
- Official repository for MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models [NeurIPS 2024] ☆63 · Updated 5 months ago
- [ACL 2024] Self-Training with Direct Preference Optimization Improves Chain-of-Thought Reasoning ☆42 · Updated 8 months ago
- We introduce ScaleQuest, a scalable, novel, and cost-effective data synthesis method to unleash the reasoning capability of LLMs. ☆61 · Updated 5 months ago
- The source code for running LLMs on the AAAR-1.0 benchmark. ☆16 · Updated last week
- Code for preprint "Metadata Conditioning Accelerates Language Model Pre-training (MeCo)" ☆36 · Updated 3 weeks ago
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems. ☆59 · Updated 9 months ago
- RAG-RewardBench: Benchmarking Reward Models in Retrieval Augmented Generation for Preference Alignment ☆16 · Updated 3 months ago
- Code for Paper: Teaching Language Models to Critique via Reinforcement Learning ☆90 · Updated last week
- Self-Supervised Alignment with Mutual Information ☆16 · Updated 10 months ago
- Official repository for paper "Weak-to-Strong Extrapolation Expedites Alignment" ☆74 · Updated 10 months ago
- [ACL 2024] Masked Thought: Simply Masking Partial Reasoning Steps Can Improve Mathematical Reasoning Learning of Language Models ☆19 · Updated 9 months ago
- Official implementation of the paper "From Complex to Simple: Enhancing Multi-Constraint Complex Instruction Following Ability of Large L… ☆46 · Updated 9 months ago
- ☆14 · Updated last year