yale-nlp / refdpo
☆15 · Updated 8 months ago
Alternatives and similar repositories for refdpo:
Users interested in refdpo are comparing it to the repositories listed below.
- This is the official repository for "Safer-Instruct: Aligning Language Models with Automated Preference Data" ☆17 · Updated last year
- Codebase for Instruction Following without Instruction Tuning ☆33 · Updated 6 months ago
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆37 · Updated last year
- Code for paper: "LASeR: Learning to Adaptively Select Reward Models with Multi-Arm Bandits" ☆13 · Updated 5 months ago
- [NAACL 2025] Source code for MMEvalPro, a more trustworthy and efficient benchmark for evaluating LMMs ☆23 · Updated 5 months ago
- [ACL 2024] Self-Training with Direct Preference Optimization Improves Chain-of-Thought Reasoning ☆41 · Updated 7 months ago
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆45 · Updated 2 months ago
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆46 · Updated last year
- Self-Supervised Alignment with Mutual Information ☆16 · Updated 9 months ago
- Code for "Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective" ☆32 · Updated 10 months ago
- SLED: Self Logits Evolution Decoding for Improving Factuality in Large Language Model https://arxiv.org/pdf/2411.02433 ☆24 · Updated 3 months ago
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆43 · Updated 7 months ago
- The code and data for the paper JiuZhang3.0 ☆42 · Updated 9 months ago
- Automatic prompt optimization framework for multi-step agent tasks. ☆28 · Updated 4 months ago
- RAG-RewardBench: Benchmarking Reward Models in Retrieval Augmented Generation for Preference Alignment ☆16 · Updated 3 months ago
- A scalable automated alignment method for large language models. Resources for "Aligning Large Language Models via Self-Steering Optimiza… ☆15 · Updated 4 months ago
- ☆12 · Updated 3 months ago
- This is an official implementation of the Reward rAnked Fine-Tuning Algorithm (RAFT), also known as iterative best-of-n fine-tuning or re… ☆26 · Updated 6 months ago
- ☆16 · Updated 2 months ago
- ☆12 · Updated last year
- This repository contains the code and data for the paper "VisOnlyQA: Large Vision Language Models Still Struggle with Visual Perception o… ☆22 · Updated 3 months ago
- Code for ICLR 2025 Paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆44 · Updated last month
- ☆29 · Updated 2 months ago
- Official implementation of paper "Beyond Bradley-Terry Models: A General Preference Model for Language Model Alignment" (https://arxiv.or… ☆22 · Updated last month
- The code of arXiv paper: "Dynamic Scaling of Unit Tests for Code Reward Modeling" ☆16 · Updated 2 months ago
- Repository for Skill Set Optimization ☆12 · Updated 7 months ago