yale-nlp / refdpo
☆15 · Updated 3 months ago
Related projects
Alternatives and complementary repositories for refdpo
- Codebase for Instruction Following without Instruction Tuning ☆31 · Updated last month
- [ACL 2024] Self-Training with Direct Preference Optimization Improves Chain-of-Thought Reasoning ☆30 · Updated 3 months ago
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆36 · Updated 8 months ago
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆33 · Updated 10 months ago
- Source code for MMEvalPro, a more trustworthy and efficient benchmark for evaluating LMMs ☆22 · Updated last month
- The official repository for "Safer-Instruct: Aligning Language Models with Automated Preference Data" ☆17 · Updated 8 months ago
- Directional Preference Alignment ☆50 · Updated last month
- ☆14 · Updated 9 months ago
- Code for "Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective" ☆30 · Updated 6 months ago
- InstructRAG: Instructing Retrieval-Augmented Generation via Self-Synthesized Rationales ☆53 · Updated this week
- Code for paper "Unraveling Cross-Modality Knowledge Conflicts in Large Vision-Language Models." ☆33 · Updated last month
- ☆30 · Updated this week
- [NeurIPS 2024] 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies https://arxiv.org/abs/2407.13623 ☆69 · Updated last month
- We introduce EMMET and unify model editing with popular algorithms ROME and MEMIT. ☆12 · Updated 2 months ago
- A Closer Look into Mixture-of-Experts in Large Language Models ☆40 · Updated 3 months ago
- Code for paper: Long cOntext aliGnment via efficient preference Optimization ☆12 · Updated 2 weeks ago
- Suri: Multi-constraint instruction following for long-form text generation (EMNLP’24) ☆17 · Updated last week
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆56 · Updated 8 months ago
- Long Context Extension and Generalization in LLMs ☆39 · Updated 2 months ago
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆39 · Updated 3 months ago
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆44 · Updated 10 months ago
- [NeurIPS 2024] Train LLMs with diverse system messages reflecting individualized preferences to generalize to unseen system messages ☆36 · Updated last month
- Official implementation of the paper "General Preference Modeling with Preference Representations for Aligning Language Models" (https://arxi… ☆18 · Updated 3 weeks ago
- The code and data for the paper JiuZhang3.0 ☆35 · Updated 5 months ago
- ☆64 · Updated 7 months ago
- The Good, The Bad, and The Greedy: Evaluation of LLMs Should Not Ignore Non-Determinism ☆25 · Updated 4 months ago
- Official repository for MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models [NeurIPS 2024] ☆49 · Updated last week
- ☆21 · Updated 5 months ago
- ☆16 · Updated 4 months ago
- ☆26 · Updated last year