ZHZisZZ / modpo
[ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization
☆96, updated Aug 20, 2024
Alternatives and similar repositories for modpo
Users interested in modpo are comparing it to the repositories listed below.
- This repo supports automatic line plots for multi-seed event files from TensorBoard. (☆12, updated Jun 23, 2022)
- Official code for "Decoding-Time Language Model Alignment with Multiple Objectives". (☆29, updated Oct 30, 2024)
- Introducing Filtered Direct Preference Optimization (fDPO) that enhances language model alignment with human preferences by discarding lo… (☆16, updated Nov 27, 2024)
- Directional Preference Alignment (☆58, updated Sep 23, 2024)
- Code and models for EMNLP 2024 paper "WPO: Enhancing RLHF with Weighted Preference Optimization" (☆41, updated Sep 24, 2024)
- ☆24, updated Oct 14, 2024
- Official implementation of the paper "From Complex to Simple: Enhancing Multi-Constraint Complex Instruction Following Ability of Large L… (☆53, updated Jun 24, 2024)
- Exchange-of-Thought: Enhancing Large Language Model Capabilities through Cross-Model Communication (☆21, updated Mar 21, 2024)
- TOD-Flow: Modeling the Structure of Task-Oriented Dialogues (☆13, updated Feb 7, 2024)
- Documentation at (☆14, updated Mar 27, 2025)
- ☆38, updated Oct 2, 2024
- [NeurIPS'24] Weak-to-Strong Search: Align Large Language Models via Searching over Small Language Models (☆66, updated Dec 10, 2024)
- ☆24, updated Jun 5, 2021
- Official repository for ORPO (☆471, updated May 31, 2024)
- The repo for using the model https://huggingface.co/thu-coai/Attacker-v0.1 (☆13, updated Apr 23, 2025)
- Python library for solving reinforcement learning (RL) problems using generative models. (☆11, updated Feb 18, 2025)
- The implementation of LLMTreeRec (☆14, updated Dec 9, 2024)
- The official GitHub page for paper "NegativePrompt: Leveraging Psychology for Large Language Models Enhancement via Negative Emotional St… (☆25, updated May 10, 2024)
- Code accompanying the paper "Noise Contrastive Alignment of Language Models with Explicit Rewards" (NeurIPS 2024) (☆58, updated Nov 8, 2024)
- [ICLR 2025] This repository contains the code to reproduce the results from our paper From Sparse Dependence to Sparse Attention: Unveili… (☆12, updated Mar 7, 2025)
- ☆12, updated Jan 2, 2024
- ☆13, updated Jan 22, 2025
- Official repository for ACL 2025 paper "Model Extrapolation Expedites Alignment" (☆75, updated May 20, 2025)
- [ICLR 2025] Unintentional Unalignment: Likelihood Displacement in Direct Preference Optimization (☆32, updated Jan 7, 2026)
- ☆25, updated Jul 15, 2025
- RewardBench: the first evaluation tool for reward models. (☆687, updated Jan 31, 2026)
- A recipe for online RLHF and online iterative DPO. (☆539, updated Dec 28, 2024)
- A fork of the PEFT library, supporting Robust Adaptation (RoSA) (☆15, updated Aug 16, 2024)
- [ACL 2024] Learning to Edit: Aligning LLMs with Knowledge Editing (☆36, updated Aug 19, 2024)
- Implementation of VALOR (Variational Option Discovery Algorithms) (☆10, updated Jun 28, 2019)
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) (☆127, updated Mar 30, 2024)
- [NeurIPS 2024] The implementation of paper "On Softmax Direct Preference Optimization for Recommendation" (☆96, updated Nov 29, 2024)
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper entitled "R-Tuning: Instructing Large Language Models to Say 'I Don't… (☆129, updated Jul 10, 2024)
- [ICLR 2025 Spotlight] Weak-to-strong preference optimization: stealing reward from weak aligned model (☆16, updated Feb 24, 2025)
- The official implementation of "ICDPO: Effectively Borrowing Alignment Capability of Others via In-context Direct Preference Optimization… (☆16, updated Feb 15, 2024)
- Co-Supervised Learning: Improving Weak-to-Strong Generalization with Hierarchical Mixture of Experts (☆16, updated Feb 26, 2024)
- ☆15, updated May 22, 2025
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" (☆58, updated Feb 29, 2024)
- [EMNLP 2025] Reasoning-to-Defend: Safety-Aware Reasoning Can Defend Large Language Models from Jailbreaking (☆12, updated Aug 22, 2025)