yale-nlp / refdpo
☆15 · Updated 6 months ago
Alternatives and similar repositories for refdpo:
Users interested in refdpo are comparing it to the repositories listed below.
- This is the official repository for "Safer-Instruct: Aligning Language Models with Automated Preference Data" ☆17 · Updated 11 months ago
- Codebase for Instruction Following without Instruction Tuning ☆33 · Updated 4 months ago
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆46 · Updated last year
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆44 · Updated last month
- Code for paper: "LASeR: Learning to Adaptively Select Reward Models with Multi-Arm Bandits" ☆13 · Updated 4 months ago
- Code for "Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective" ☆32 · Updated 9 months ago
- SLED: Self Logits Evolution Decoding for Improving Factuality in Large Language Models (https://arxiv.org/pdf/2411.02433) ☆21 · Updated 2 months ago
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆34 · Updated last year
- ☆13 · Updated 11 months ago
- A scalable automated alignment method for large language models. Resources for "Aligning Large Language Models via Self-Steering Optimiza… ☆14 · Updated 3 months ago
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆42 · Updated 6 months ago
- Code for paper: Long cOntext aliGnment via efficient preference Optimization ☆14 · Updated this week
- Official implementation of the paper "Beyond Bradley-Terry Models: A General Preference Model for Language Model Alignment" (https://arxiv.or… ☆21 · Updated this week
- The code and data for the paper JiuZhang3.0 ☆40 · Updated 8 months ago
- Official Repository of Are Your LLMs Capable of Stable Reasoning? ☆18 · Updated this week
- RAG-RewardBench: Benchmarking Reward Models in Retrieval Augmented Generation for Preference Alignment ☆14 · Updated 2 months ago
- Directional Preference Alignment ☆56 · Updated 4 months ago
- ☆20 · Updated 7 months ago
- [NAACL 2025] Source code for MMEvalPro, a more trustworthy and efficient benchmark for evaluating LMMs ☆23 · Updated 4 months ago
- [NeurIPS'24] Weak-to-Strong Search: Align Large Language Models via Searching over Small Language Models ☆54 · Updated 2 months ago
- On The Planning Abilities of OpenAI's o1 Models: Feasibility, Optimality, and Generalizability ☆37 · Updated last month
- ☆64 · Updated 10 months ago
- Implementation of the model: "Reka Core, Flash, and Edge: A Series of Powerful Multimodal Language Models" in PyTorch ☆29 · Updated last week
- Self-Supervised Alignment with Mutual Information ☆16 · Updated 8 months ago
- Official implementation of the paper "MMInA: Benchmarking Multihop Multimodal Internet Agents" ☆41 · Updated last week
- ☆20 · Updated 3 months ago
- This repo contains code for "VISTA: Enhancing Long-Duration and High-Resolution Video Understanding by VIdeo SpatioTemporal Augmentation" ☆11 · Updated last month
- [NeurIPS 2024] Train LLMs with diverse system messages reflecting individualized preferences to generalize to unseen system messages ☆42 · Updated 2 months ago
- [ACL 2024] Self-Training with Direct Preference Optimization Improves Chain-of-Thought Reasoning ☆39 · Updated 6 months ago
- [NeurIPS-2024] 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies (https://arxiv.org/abs/2407.13623) ☆77 · Updated 4 months ago
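Several of the repositories above (e.g., Bootstrapping Language Models via DPO Implicit Rewards, Self-Training with Direct Preference Optimization) build on the Direct Preference Optimization objective that refdpo also centers on. As a quick reference, here is a minimal PyTorch sketch of the standard DPO loss; the function and tensor names are illustrative and are not taken from refdpo or any repository listed above.

```python
# Minimal sketch of the DPO objective (illustrative only; not code from any listed repo).
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """DPO loss over per-sequence summed log-probabilities (hypothetical tensor names)."""
    # Implicit rewards: log-ratio of the policy against a frozen reference model
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the margin between chosen and rejected implicit rewards
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Example with random log-probabilities for a batch of 4 preference pairs
if __name__ == "__main__":
    lp = lambda: torch.randn(4)
    print(dpo_loss(lp(), lp(), lp(), lp()).item())
```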