haozheji / exact-optimization
ICML 2024 - Official Repository for EXO: Towards Efficient Exact Optimization of Language Model Alignment
☆45
Related projects:
- Directional Preference Alignment (☆44)
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision (☆78)
- Code for the ACL 2024 paper "Adversarial Preference Optimization (APO)" (☆49)
- Code and data used in the paper "Training on Incorrect Synthetic Data via RL Scales LLM Math Reasoning Eight-Fold" (☆22)
- Reference implementation of Token-level Direct Preference Optimization (TDPO) (☆89)
- Source code for Self-Evaluation Guided MCTS for online DPO (☆101)
- Official implementation of the paper "🎯DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving" (☆57)
- Domain-specific preference (DSP) data and customized RM fine-tuning (☆24)
- 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies (https://arxiv.org/abs/2407.13623) (☆52)
- Official repository for the paper "Weak-to-Strong Extrapolation Expedites Alignment" (☆62)
- Code and models for the paper "WPO: Enhancing RLHF with Weighted Preference Optimization" (☆21)
- Repository for NPHardEval, a quantified-dynamic benchmark for LLMs (☆46)
- Reproduction of "RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment" (☆63)
- Code for the paper "Diffusion of Thoughts: Chain-of-Thought Reasoning in Diffusion Language Models" (☆65)
- Implementation of the ICML 2024 paper "Training Large Language Models for Reasoning through Reverse Curriculum Reinforcement Learning" pr… (☆57)
- [ACL 2024] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization (☆44)
- Code accompanying the paper "Noise Contrastive Alignment of Language Models with Explicit Rewards" (☆22)
- Official code for the paper "Large Language Models Are Implicitly Topic Models: Explaining and Finding Good Demonstrations for In-Context Le…" (☆66)
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" (☆121)
- Official implementation of the paper "Learning From Failure: Integrating Negative Examples when Fine-tuning Large Language Models as Agen…" (☆20)
- Self-Explore to avoid the pit! Improving the Reasoning Capabilities of Language Models with Fine-grained Rewards (☆39)
- Knowledge Circuits in Pretrained Transformers (☆46)
- Curated resources for LLM mathematical reasoning, most of which are screened by @tongyx361 to ensure high quality and accompanied wit… (☆61)
- Evaluating Mathematical Reasoning Beyond Accuracy (☆32)
- [ACL 2024] Self-Training with Direct Preference Optimization Improves Chain-of-Thought Reasoning (☆24)
- Achieving Efficient Alignment through Learned Correction (☆103)
- Official implementation of "Bootstrapping Language Models via DPO Implicit Rewards" (☆33)