yale-nlp / refdpo
☆16 · Updated last year
Alternatives and similar repositories for refdpo
Users who are interested in refdpo are comparing it to the libraries listed below.
- ☆19 · Updated 6 months ago
- This is the official repository for "Safer-Instruct: Aligning Language Models with Automated Preference Data" ☆17 · Updated last year
- Codebase for Instruction Following without Instruction Tuning ☆35 · Updated last year
- [ACL 2025] Are Your LLMs Capable of Stable Reasoning? ☆30 · Updated 2 months ago
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆38 · Updated last year
- Code for "Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective" ☆33 · Updated last year
- Code for paper: Long cOntext aliGnment via efficient preference Optimization ☆13 · Updated 7 months ago
- ☆15 · Updated last year
- Exploration of automated dataset selection approaches at large scales. ☆47 · Updated 7 months ago
- Implementation for the paper "Fictitious Synthetic Data Can Improve LLM Factuality via Prerequisite Learning" ☆11 · Updated 8 months ago
- ☆18 · Updated 2 months ago
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆48 · Updated last year
- [NeurIPS 2024] 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies https://arxiv.org/abs/2407.13623 ☆86 · Updated last year
- ☆22 · Updated last year
- ☆45 · Updated this week
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆44 · Updated 5 months ago
- A scalable automated alignment method for large language models. Resources for "Aligning Large Language Models via Self-Steering Optimiza… ☆20 · Updated 10 months ago
- [ICLR 2025] Bridging and Modeling Correlations in Pairwise Data for Direct Preference Optimization ☆12 · Updated 8 months ago
- Control LLM ☆19 · Updated 5 months ago
- RAG-RewardBench: Benchmarking Reward Models in Retrieval Augmented Generation for Preference Alignment ☆16 · Updated 9 months ago
- ☆48 · Updated 7 months ago
- Let's Sample Step by Step: Adaptive-Consistency for Efficient Reasoning with LLMs ☆39 · Updated last year
- On The Planning Abilities of OpenAI's o1 Models: Feasibility, Optimality, and Generalizability ☆40 · Updated 2 months ago
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆55 · Updated 7 months ago
- ☆65 · Updated last year
- Code for "C3PO: Critical-Layer, Core-Expert, Collaborative Pathway Optimization for Test-Time Expert Re-Mixing" ☆18 · Updated 5 months ago
- A Recipe for Building LLM Reasoners to Solve Complex Instructions ☆24 · Updated 2 months ago
- ☆18 · Updated 2 months ago
- The code for "MoPE: Mixture of Prefix Experts for Zero-Shot Dialogue State Tracking" ☆18 · Updated 8 months ago
- [ICML 2025] Beyond Bradley-Terry Models: A General Preference Model for Language Model Alignment (https://arxiv.org/abs/2410.02197) ☆28 · Updated 3 weeks ago