junkangwu / Dr_DPO
[ICLR 2025] Official code of "Towards Robust Alignment of Language Models: Distributionally Robustifying Direct Preference Optimization"
☆18 · Updated last year
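The paper's core idea is to keep the per-pair DPO loss but replace the plain average over the batch with a soft aggregation controlled by a robustness hyperparameter β′: as β′ → ∞ it recovers the ordinary mean of DPO losses, while smaller β′ increasingly down-weights high-loss (likely mislabeled) preference pairs. Below is a minimal PyTorch sketch of that objective; the function names, signatures, and tensor shapes are illustrative assumptions, not the repository's actual API.

```python
import math

import torch
import torch.nn.functional as F


def dpo_losses(policy_chosen_logps, policy_rejected_logps,
               ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Per-pair DPO losses: -log sigmoid(beta * ((pi_w - pi_l) - (ref_w - ref_l)))."""
    logits = (policy_chosen_logps - policy_rejected_logps) - \
             (ref_chosen_logps - ref_rejected_logps)
    return -F.logsigmoid(beta * logits)  # shape: (batch,)


def dr_dpo_loss(per_pair_losses, beta_prime=1.0):
    """Dr. DPO aggregation: -beta' * log(mean(exp(-loss_i / beta'))).

    As beta' -> infinity this recovers the plain mean of the DPO losses;
    smaller beta' increasingly down-weights high-loss (likely noisy) pairs.
    """
    n = per_pair_losses.numel()
    return -beta_prime * (
        torch.logsumexp(-per_pair_losses / beta_prime, dim=0) - math.log(n)
    )
```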
Alternatives and similar repositories for Dr_DPO
Users interested in Dr_DPO are comparing it to the repositories listed below
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆91 · Updated last year
- Official code for "Decoding-Time Language Model Alignment with Multiple Objectives". ☆27 · Updated last year
- Code for the ICML 2024 paper "Rewards-in-Context: Multi-objective Alignment of Foundation Models with Dynamic Preference Adjustment" ☆78 · Updated 5 months ago
- [NAACL 25 main] Awesome LLM Causal Reasoning is a collection of LLM-based causal reasoning works, including papers, code, and datasets. ☆93 · Updated last month
- ☆69 · Updated this week
- ☆25 · Updated 7 months ago
- [ICML 2025] Official code of "AlphaDPO: Adaptive Reward Margin for Direct Preference Optimization" ☆22 · Updated last year
- Official implementation of the ICLR 2025 paper "Rethinking Bradley-Terry Models in Preference-based Reward Modeling: Foundations, Theory, and…" ☆67 · Updated 7 months ago
- [NeurIPS 2025] What Makes a Reward Model a Good Teacher? An Optimization Perspective ☆39 · Updated last month
- Code repo for the ICLR 2024 paper "Can LLMs Express Their Uncertainty? An Empirical Evaluation of Confidence Elicitation in LLMs" ☆135 · Updated last year
- Code for the paper "Query-Dependent Prompt Evaluation and Optimization with Offline Inverse Reinforcement Learning" ☆42 · Updated last year
- AutoLibra: Metric Induction for Agents from Open-Ended Human Feedback ☆15 · Updated 3 weeks ago
- Implementation of the ICML 2024 paper "Training Large Language Models for Reasoning through Reverse Curriculum Reinforcement Learning" pr… ☆112 · Updated last year
- [NeurIPS 2024] Official code of $\beta$-DPO: Direct Preference Optimization with Dynamic $\beta$ ☆49 · Updated last year
- ☆41 · Updated 11 months ago
- Reinforced Multi-LLM Agents training ☆56 · Updated 5 months ago
- Directional Preference Alignment ☆57 · Updated last year
- [ICLR 2025] Code & Data for the paper "Super(ficial)-alignment: Strong Models May Deceive Weak Models in Weak-to-Strong Generalization" ☆13 · Updated last year
- Reference implementation for Token-level Direct Preference Optimization (TDPO) ☆148 · Updated 8 months ago
- Repo of "Large Language Model-based Human-Agent Collaboration for Complex Task Solving" (EMNLP 2024 Findings) ☆34 · Updated last year
- Principled Data Selection for Alignment: The Hidden Risks of Difficult Examples ☆44 · Updated 3 months ago
- ☆33 · Updated last year
- Official implementation for "ALI-Agent: Assessing LLMs' Alignment with Human Values via Agent-based Evaluation" ☆21 · Updated 3 months ago
- My attempt to create a self-correcting LLM, based on the paper "Training Language Models to Self-Correct via Reinforcement Learning" by g… ☆37 · Updated 4 months ago
- The official repository of "SmartAgent: Chain-of-User-Thought for Embodied Personalized Agent in Cyber World". ☆27 · Updated 2 months ago
- Official implementation of Rewarded Soups ☆60 · Updated 2 years ago
- Lightweight Adapting for Black-Box Large Language Models ☆24 · Updated last year
- ☆30 · Updated last year
- Code for the paper "Aligning Large Language Models with Representation Editing: A Control Perspective" ☆34 · Updated 9 months ago
- ☆17 · Updated 3 months ago