YJiangcm / BMC
[ICLR 2025] Bridging and Modeling Correlations in Pairwise Data for Direct Preference Optimization
☆12 · Updated last year
Alternatives and similar repositories for BMC
Users interested in BMC are comparing it to the repositories listed below
- RAG-RewardBench: Benchmarking Reward Models in Retrieval Augmented Generation for Preference Alignment ☆16 · Updated last year
- ☆14 · Updated last year
- ☆16 · Updated last year
- ☆21 · Updated 6 months ago
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆39 · Updated 2 years ago
- A scalable automated alignment method for large language models. Resources for "Aligning Large Language Models via Self-Steering Optimiza… ☆20 · Updated last year
- ☆23 · Updated last year
- [ICML 2025] Beyond Bradley-Terry Models: A General Preference Model for Language Model Alignment (https://arxiv.org/abs/2410.02197) ☆39 · Updated 5 months ago
- ☆32 · Updated last week
- Official repository for Trustworthy Alignment of Retrieval-Augmented Large Language Models via Reinforcement Learning ☆12 · Updated last year
- Codebase for Instruction Following without Instruction Tuning ☆36 · Updated last year
- ☆19 · Updated 11 months ago
- Suri: Multi-constraint instruction following for long-form text generation (EMNLP’24) ☆27 · Updated 4 months ago
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated last year
- ☆45 · Updated 8 months ago
- ☆30 · Updated last year
- From Accuracy to Robustness: A Study of Rule- and Model-based Verifiers in Mathematical Reasoning. ☆24 · Updated 4 months ago
- Code for paper: Long cOntext aliGnment via efficient preference Optimization ☆24 · Updated 4 months ago
- ☆15 · Updated last year
- A paper list of multilingual pre-trained models (continually updated). ☆24 · Updated last year
- [NeurIPS 2024] An Efficient Recipe for Long Context Extension via Middle-Focused Positional Encoding ☆21 · Updated last year
- Plancraft is a Minecraft environment and agent suite to test planning capabilities in LLMs ☆26 · Updated 3 months ago
- Source code for our paper: "ARIA: Training Language Agents with Intention-Driven Reward Aggregation". ☆26 · Updated 6 months ago
- [ACL 2025] Are Your LLMs Capable of Stable Reasoning? ☆32 · Updated 6 months ago
- [ACL 2024] Code for the paper "ALaRM: Align Language Models via Hierarchical Rewards Modeling" ☆25 · Updated last year
- Evaluating the faithfulness of long-context language models ☆30 · Updated last year
- The Good, The Bad, and The Greedy: Evaluation of LLMs Should Not Ignore Non-Determinism ☆30 · Updated last year
- ☆16 · Updated last year
- Code for the 2025 ACL publication "Fine-Tuning on Diverse Reasoning Chains Drives Within-Inference CoT Refinement in LLMs" ☆32 · Updated 7 months ago
- Source code of our EMNLP 2024 paper "FactAlign: Long-form Factuality Alignment of Large Language Models" ☆19 · Updated last year