[ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization
☆96 · Updated Aug 20, 2024
Alternatives and similar repositories for modpo
Users that are interested in modpo are comparing it to the libraries listed below.
- Official code for "Decoding-Time Language Model Alignment with Multiple Objectives" ☆29 · Updated Oct 30, 2024
- Official implementation of Rewarded Soups ☆63 · Updated Sep 27, 2023
- Code for the ICML 2024 paper "Rewards-in-Context: Multi-objective Alignment of Foundation Models with Dynamic Preference Adjustment" ☆79 · Updated Jun 10, 2025
- ☆28 · Updated Jul 16, 2024
- Supports automatic line plots for multi-seed event files from TensorBoard ☆12 · Updated Jun 23, 2022
- Directional Preference Alignment ☆61 · Updated Sep 23, 2024
- Code and models for the EMNLP 2024 paper "WPO: Enhancing RLHF with Weighted Preference Optimization" ☆41 · Updated Sep 24, 2024
- [NeurIPS'24] Weak-to-Strong Search: Align Large Language Models via Searching over Small Language Models ☆67 · Updated Dec 10, 2024
- Introducing Filtered Direct Preference Optimization (fDPO) that enhances language model alignment with human preferences by discarding lo… ☆16 · Updated Nov 27, 2024
- Python library for solving reinforcement learning (RL) problems using generative models ☆11 · Updated Feb 18, 2025
- [ICLR 2025] Code to reproduce the results of the paper "From Sparse Dependence to Sparse Attention: Unveili…" ☆12 · Updated Mar 7, 2025
- Documentation at ☆14 · Updated Mar 27, 2025
- Official implementation of the paper "From Complex to Simple: Enhancing Multi-Constraint Complex Instruction Following Ability of Large L…" ☆53 · Updated Jun 24, 2024
- Official implementation of "ICDPO: Effectively Borrowing Alignment Capability of Others via In-context Direct Preference Optimization…" ☆16 · Updated Feb 15, 2024
- [ICLR 2025] Unintentional Unalignment: Likelihood Displacement in Direct Preference Optimization ☆32 · Updated Jan 7, 2026
- ☆23 · Updated Oct 14, 2024
- Code associated with "Tuning Language Models by Proxy" (Liu et al., 2024) ☆129 · Updated Mar 30, 2024
- SpeechFlow neural network implementation ☆22 · Updated Aug 8, 2024
- Code and data for the paper "Understanding Hidden Context in Preference Learning: Consequences for RLHF" ☆34 · Updated Dec 14, 2023
- ☆38 · Updated Oct 2, 2024
- Official repository for ORPO ☆478 · Updated May 31, 2024
- ☆25 · Updated Jul 15, 2025
- Steering Llama 2 with Contrastive Activation Addition ☆222 · Updated May 23, 2024
- Recipes to train reward models for RLHF ☆1,527 · Updated Apr 24, 2025
- [EMNLP 2025] Reasoning-to-Defend: Safety-Aware Reasoning Can Defend Large Language Models from Jailbreaking ☆12 · Updated Aug 22, 2025
- [NeurIPS 2024] Official code of "$\beta$-DPO: Direct Preference Optimization with Dynamic $\beta$" ☆51 · Updated Oct 23, 2024
- Beyond Empathy: Integrating Diagnostic and Therapeutic Reasoning with Large Language Models for Mental Health Counseling ☆36 · Updated Jan 24, 2026
- Code for the ACL 2024 main-conference paper "BatchEval: Towards Human-like Text Evaluation" ☆19 · Updated May 20, 2024
- Co-Supervised Learning: Improving Weak-to-Strong Generalization with Hierarchical Mixture of Experts ☆16 · Updated Feb 26, 2024
- Exchange-of-Thought: Enhancing Large Language Model Capabilities through Cross-Model Communication ☆21 · Updated Mar 21, 2024
- ☆42 · Updated Oct 29, 2024
- [NeurIPS 2024] Implementation of the paper "On Softmax Direct Preference Optimization for Recommendation" ☆99 · Updated Nov 29, 2024
- Official repository for the ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆75 · Updated May 20, 2025
- Repo for using the model https://huggingface.co/thu-coai/Attacker-v0.1 ☆13 · Updated Apr 23, 2025
- Evaluate the Quality of Critique ☆37 · Updated Jun 1, 2024
- Reproduction resources for the linear alignment paper (work in progress) ☆18 · Updated May 19, 2024
- ☆14 · Updated Feb 26, 2024
- Source code for "Preference-grounded Token-level Guidance for Language Model Fine-tuning" (NeurIPS 2023) ☆17 · Updated Jan 8, 2025
- [ICML 2025] Beyond Bradley-Terry Models: A General Preference Model for Language Model Alignment (https://arxiv.org/abs/2410.02197) ☆40 · Updated Sep 8, 2025