yale-nlp / refdpo
☆15 · Updated 5 months ago
Alternatives and similar repositories for refdpo:
Users interested in refdpo are comparing it to the libraries listed below.
- Codebase for Instruction Following without Instruction Tuning ☆33 · Updated 3 months ago
- The official repository for "Safer-Instruct: Aligning Language Models with Automated Preference Data" ☆17 · Updated 10 months ago
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆33 · Updated last year
- Source code for MMEvalPro, a more trustworthy and efficient benchmark for evaluating LMMs ☆22 · Updated 3 months ago
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆46 · Updated last year
- A scalable automated alignment method for large language models. Resources for "Aligning Large Language Models via Self-Steering Optimization" ☆13 · Updated last month
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆44 · Updated 3 weeks ago
- Code for paper: "LASeR: Learning to Adaptively Select Reward Models with Multi-Arm Bandits" ☆13 · Updated 3 months ago
- ☆20 · Updated 6 months ago
- SLED: Self Logits Evolution Decoding for Improving Factuality in Large Language Models (https://arxiv.org/pdf/2411.02433) ☆18 · Updated last month
- ☆14 · Updated 10 months ago
- This repository contains the code and data for the paper "VisOnlyQA: Large Vision Language Models Still Struggle with Visual Perception of Geometric Information" ☆20 · Updated last month
- ☆13 · Updated last year
- Official implementation of the paper "General Preference Modeling with Preference Representations for Aligning Language Models" (https://arxi… ☆21 · Updated last month
- The code and data for the paper JiuZhang3.0 ☆40 · Updated 7 months ago
- Code for "Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective" ☆31 · Updated 8 months ago
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆41 · Updated 5 months ago
- Code and models for EMNLP 2024 paper "WPO: Enhancing RLHF with Weighted Preference Optimization" ☆32 · Updated 3 months ago
- [NeurIPS 2024 Spotlight] Code and data for the paper "Finding Transformer Circuits with Edge Pruning" ☆42 · Updated last month
- [NeurIPS-2024] 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies (https://arxiv.org/abs/2407.13623) ☆75 · Updated 3 months ago
- A Closer Look into Mixture-of-Experts in Large Language Models ☆41 · Updated 5 months ago
- Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators (Liu et al.; COLM 2024) ☆40 · Updated 3 weeks ago
- On The Planning Abilities of OpenAI's o1 Models: Feasibility, Optimality, and Generalizability ☆35 · Updated 2 months ago
- Directional Preference Alignment ☆54 · Updated 3 months ago
- The code for creating the iGSM datasets in papers "Physics of Language Models Part 2.1, Grade-School Math and the Hidden Reasoning Process…" ☆19 · Updated this week
- [NAACL 2024] Making Language Models Better Tool Learners with Execution Feedback ☆39 · Updated 10 months ago
- [NeurIPS 2024 Main Track] Code for the paper titled "Instruction Tuning With Loss Over Instructions" ☆33 · Updated 7 months ago
- Code for paper: Long cOntext aliGnment via efficient preference Optimization ☆13 · Updated this week
- ☆19 · Updated 2 months ago
- [ACL 2024] Self-Training with Direct Preference Optimization Improves Chain-of-Thought Reasoning ☆33 · Updated 5 months ago