Code and models for EMNLP 2024 paper "WPO: Enhancing RLHF with Weighted Preference Optimization"
☆41 · Sep 24, 2024 · Updated last year
Alternatives and similar repositories for WPO
Users interested in WPO are comparing it to the libraries listed below.
- ☆13 · Jun 11, 2024 · Updated last year
- Code and data for paper "Context-faithful Prompting for Large Language Models" ☆42 · Mar 23, 2023 · Updated 3 years ago
- [EMNLP 2025 Findings] Familiarity-aware Evidence Compression for Retrieval Augmented Generation ☆14 · Aug 20, 2025 · Updated 7 months ago
- Code for paper "Policy Optimization in RLHF: The Impact of Out-of-preference Data" ☆29 · Dec 19, 2023 · Updated 2 years ago
- Repository for the resources in TACL 2022 paper "Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inf…" ☆14 · Aug 17, 2022 · Updated 3 years ago
- [EMNLP 2022] Summarization as Indirect Supervision for Relation Extraction (SuRE) ☆27 · Nov 22, 2022 · Updated 3 years ago
- POP3 and SMTP clients implemented from raw sockets, supporting composing, sending, receiving, reading, and deleting email, with a simple GUI (PyQt5) ☆12 · Dec 24, 2023 · Updated 2 years ago
- [EMNLP 2024] mDPO: Conditional Preference Optimization for Multimodal Large Language Models ☆86 · Nov 10, 2024 · Updated last year
- Reference implementation for Token-level Direct Preference Optimization (TDPO) ☆152 · Feb 14, 2025 · Updated last year
- Official implementation for "Can NLI Provide Proper Indirect Supervision for Low-resource Biomedical Relation Extraction?" (ACL 2023) ☆15 · Jun 17, 2023 · Updated 2 years ago
- [NeurIPS 2024] Official code of $\beta$-DPO: Direct Preference Optimization with Dynamic $\beta$ ☆50 · Oct 23, 2024 · Updated last year
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆948 · Feb 16, 2025 · Updated last year
- A Chinese version of Llama3, obtained from Llama3 through further CPT, SFT, and ORPO ☆16 · Apr 24, 2024 · Updated last year
- ☆20 · Dec 14, 2024 · Updated last year
- [ICML 2024] Official repository for EXO: Towards Efficient Exact Optimization of Language Model Alignment ☆56 · Jun 16, 2024 · Updated last year
- ☆11 · May 28, 2024 · Updated last year
- ☆27 · Dec 29, 2023 · Updated 2 years ago
- [ACL 2025, Main Conference, Oral] Intuitive Fine-Tuning: Towards Simplifying Alignment into a Single Process ☆30 · Aug 2, 2024 · Updated last year
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆97 · Aug 20, 2024 · Updated last year
- Self-Supervised Alignment with Mutual Information ☆20 · May 24, 2024 · Updated last year
- [ACL 2025] Agentic Reward Modeling: Integrating Human Preferences with Verifiable Correctness Signals for Reliable Reward Systems ☆125 · Jun 11, 2025 · Updated 9 months ago
- Minimal implementation of the paper "Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models" (arXiv 2401.01335) ☆29 · Mar 1, 2024 · Updated 2 years ago
- Reproduction resources for the linear alignment paper; still a work in progress ☆18 · May 19, 2024 · Updated last year
- CFBench: A Comprehensive Constraints-Following Benchmark for LLMs ☆50 · Aug 26, 2024 · Updated last year
- Implementation of SelfExtend from the paper "LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning" in PyTorch and Zeta ☆13 · Nov 11, 2024 · Updated last year
- Official repository for ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆75 · May 20, 2025 · Updated 10 months ago
- Official implementation of the paper "From Complex to Simple: Enhancing Multi-Constraint Complex Instruction Following Ability of Large L…" ☆53 · Jun 24, 2024 · Updated last year
- The predecessor of CiteLab ☆18 · Feb 3, 2026 · Updated last month
- Code for paper "Preserving Diversity in Supervised Fine-tuning of Large Language Models" ☆52 · May 12, 2025 · Updated 10 months ago
- RewardBench: the first evaluation tool for reward models ☆705 · Feb 16, 2026 · Updated last month
- [NeurIPS 2025] Discriminative Constrained Optimization for Reinforcing Large Reasoning Models ☆52 · Mar 14, 2026 · Updated last week
- Joint use of the CPO and SimPO methods for improved reference-free preference learning ☆57 · Aug 13, 2024 · Updated last year
- The official repository of the Omni-MATH benchmark ☆93 · Dec 22, 2024 · Updated last year
- The rule-based evaluation subset and code implementation of Omni-MATH ☆27 · Dec 23, 2024 · Updated last year
- ☆24 · Oct 14, 2024 · Updated last year
- Official repo for "CodeScaler: Scaling Code LLM Training and Test-Time Inference via Execution-Free Reward Models" ☆31 · Mar 5, 2026 · Updated 3 weeks ago
- Code for paper "An Improved Baseline for Sentence-level Relation Extraction" (AACL-IJCNLP 2022) ☆91 · Sep 22, 2022 · Updated 3 years ago
- [ICML 2024] Long Is More for Alignment: A Simple but Tough-to-Beat Baseline for Instruction Fine-Tuning ☆21 · May 2, 2024 · Updated last year