swtheing / PF-PPO-RLHF
☆11 · Updated last week
Related projects:
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆28 · Updated 8 months ago
- The official implementation of the paper "Learning From Failure: Integrating Negative Examples when Fine-tuning Large Language Models as Agen…" ☆20 · Updated 6 months ago
- Curation of resources for LLM mathematical reasoning, most of which are screened by @tongyx361 to ensure high quality and accompanied wit… ☆61 · Updated 2 months ago
- ☆18 · Updated 3 months ago
- Domain-specific preference (DSP) data and customized RM fine-tuning. ☆24 · Updated 6 months ago
- 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies https://arxiv.org/abs/2407.13623 ☆52 · Updated 3 weeks ago
- Official implementation for the paper *🎯DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving* ☆57 · Updated 3 weeks ago
- [ACL'2024] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆44 · Updated last month
- Reference implementation for Token-level Direct Preference Optimization (TDPO) ☆89 · Updated 2 months ago
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning. COLM 2024 accepted paper. ☆21 · Updated 3 months ago
- ☆79 · Updated 3 months ago
- Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing". ☆22 · Updated 5 months ago
- The official GitHub page for "Unleashing the Potential of Large Language Models as Prompt Optimizers: An Analogical Analysis with … ☆13 · Updated 2 months ago
- ☆13 · Updated 2 months ago
- The code and data for the paper JiuZhang3.0 ☆29 · Updated 3 months ago
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆33 · Updated last month
- Official implementation of "Training on the Benchmark Is Not All You Need". ☆18 · Updated last week
- ICML 2024 - Official repository for EXO: Towards Efficient Exact Optimization of Language Model Alignment ☆45 · Updated 3 months ago
- Code accompanying the paper "Noise Contrastive Alignment of Language Models with Explicit Rewards" ☆22 · Updated 2 months ago
- ☆12 · Updated 2 months ago
- ☆23 · Updated 3 weeks ago
- Trial and Error: Exploration-Based Trajectory Optimization of LLM Agents (ACL 2024 Main Conference) ☆86 · Updated 3 months ago
- Directional Preference Alignment ☆44 · Updated 3 months ago
- Dataset Reset Policy Optimization ☆27 · Updated 5 months ago
- Evaluating Mathematical Reasoning Beyond Accuracy ☆32 · Updated 5 months ago
- Reproduction resources for the linear alignment paper; still a work in progress. ☆13 · Updated 4 months ago
- Implementation of the ICML 2024 paper "Training Large Language Models for Reasoning through Reverse Curriculum Reinforcement Learning" pr… ☆57 · Updated 7 months ago
- ☆80 · Updated 9 months ago
- Feeling confused about super alignment? Here is a reading list. ☆42 · Updated 8 months ago
- Intuitive Fine-Tuning: Towards Simplifying Alignment into a Single Process ☆17 · Updated last month