[ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization
☆96 · Aug 20, 2024 · Updated last year
Alternatives and similar repositories for modpo
Users interested in modpo are comparing it to the repositories listed below.
- This repo supports automatic line plots for multi-seed event files from TensorBoard ☆12 · Jun 23, 2022 · Updated 3 years ago
- [ACL'24, Outstanding Paper] Emulated Disalignment: Safety Alignment for Large Language Models May Backfire! ☆39 · Aug 2, 2024 · Updated last year
- ☆28 · Jul 16, 2024 · Updated last year
- Official code for "Decoding-Time Language Model Alignment with Multiple Objectives". ☆29 · Oct 30, 2024 · Updated last year
- Code for the ICML 2024 paper "Rewards-in-Context: Multi-objective Alignment of Foundation Models with Dynamic Preference Adjustment" ☆79 · Jun 10, 2025 · Updated 8 months ago
- Introducing Filtered Direct Preference Optimization (fDPO) that enhances language model alignment with human preferences by discarding lo… ☆16 · Nov 27, 2024 · Updated last year
- Directional Preference Alignment ☆58 · Sep 23, 2024 · Updated last year
- Code and models for EMNLP 2024 paper "WPO: Enhancing RLHF with Weighted Preference Optimization" ☆41 · Sep 24, 2024 · Updated last year
- ☆24 · Oct 14, 2024 · Updated last year
- Exchange-of-Thought: Enhancing Large Language Model Capabilities through Cross-Model Communication ☆21 · Mar 21, 2024 · Updated last year
- Documentation at ☆14 · Mar 27, 2025 · Updated 11 months ago
- TOD-Flow: Modeling the Structure of Task-Oriented Dialogues ☆13 · Feb 7, 2024 · Updated 2 years ago
- ☆38 · Oct 2, 2024 · Updated last year
- ☆24 · Jun 5, 2021 · Updated 4 years ago
- The implementation of LLMTreeRec ☆14 · Dec 9, 2024 · Updated last year
- Python library for solving reinforcement learning (RL) problems using generative models. ☆11 · Feb 18, 2025 · Updated last year
- A Multi-Session and Multi-Therapy Benchmark for High-Realism AI Psychological Counselor ☆30 · Jan 13, 2026 · Updated last month
- The repo for using the model https://huggingface.co/thu-coai/Attacker-v0.1 ☆13 · Apr 23, 2025 · Updated 10 months ago
- The official GitHub page for paper "NegativePrompt: Leveraging Psychology for Large Language Models Enhancement via Negative Emotional St… ☆25 · May 10, 2024 · Updated last year
- Code accompanying the paper "Noise Contrastive Alignment of Language Models with Explicit Rewards" (NeurIPS 2024) ☆58 · Nov 8, 2024 · Updated last year
- ☆12 · Jan 2, 2024 · Updated 2 years ago
- [ICLR 2025] This repository contains the code to reproduce the results from our paper From Sparse Dependence to Sparse Attention: Unveili… ☆12 · Mar 7, 2025 · Updated last year
- ☆13 · Jan 22, 2025 · Updated last year
- Official repository for ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆75 · May 20, 2025 · Updated 9 months ago
- [ICLR 2025] Unintentional Unalignment: Likelihood Displacement in Direct Preference Optimization ☆32 · Jan 7, 2026 · Updated 2 months ago
- ☆25 · Jul 15, 2025 · Updated 7 months ago
- Dataset Reset Policy Optimization ☆31 · Apr 12, 2024 · Updated last year
- RewardBench: the first evaluation tool for reward models. ☆697 · Feb 16, 2026 · Updated 3 weeks ago
- A recipe for online RLHF and online iterative DPO. ☆540 · Dec 28, 2024 · Updated last year
- Implementation of VALOR (Variational Option Discovery Algorithms) ☆10 · Jun 28, 2019 · Updated 6 years ago
- A fork of the PEFT library, supporting Robust Adaptation (RoSA) ☆15 · Aug 16, 2024 · Updated last year
- AutoSkill: Experience-Driven Lifelong Learning via Skill Self-Evolution ☆36 · Updated this week
- [ACL 2024] Learning to Edit: Aligning LLMs with Knowledge Editing ☆37 · Aug 19, 2024 · Updated last year
- [NeurIPS 2024] The implementation of the paper "On Softmax Direct Preference Optimization for Recommendation" ☆96 · Nov 29, 2024 · Updated last year
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆128 · Mar 30, 2024 · Updated last year
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper entitled "R-Tuning: Instructing Large Language Models to Say 'I Don't… ☆130 · Jul 10, 2024 · Updated last year
- ☆15 · May 22, 2025 · Updated 9 months ago
- [ICLR 2025 Spotlight] Weak-to-strong preference optimization: stealing reward from weak aligned model ☆16 · Feb 24, 2025 · Updated last year
- The official implementation of "ICDPO: Effectively Borrowing Alignment Capability of Others via In-context Direct Preference Optimization… ☆16 · Feb 15, 2024 · Updated 2 years ago