fe1ixxu / CPO_SIMPO
This repository combines the CPO and SimPO methods for improved reference-free preference learning.
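The repository's exact training objective lives in its code; as a rough illustration only, here is a minimal pure-Python sketch of how a SimPO-style length-normalized preference margin can be combined with a CPO-style NLL (behavior-cloning) term on the chosen response. The function name, the single-pair scalar formulation, and the hyperparameter values (`beta`, `gamma`, `nll_weight`) are all illustrative assumptions, not the repo's actual implementation.

```python
import math

def logsigmoid(x):
    # Numerically stable log(sigmoid(x)).
    return -math.log1p(math.exp(-x)) if x >= 0 else x - math.log1p(math.exp(x))

def cpo_simpo_loss(logp_chosen, logp_rejected, len_chosen, len_rejected,
                   nll_chosen, beta=2.0, gamma=0.5, nll_weight=1.0):
    """Illustrative combined objective for a single preference pair.

    SimPO part: length-normalized log-prob margin against a target margin gamma,
    with no reference model. CPO part: an NLL term on the chosen response that
    regularizes the policy toward good outputs. Hyperparameters are placeholders.
    """
    # Length-normalized "rewards" from the policy's own log-probs.
    r_chosen = beta * logp_chosen / len_chosen
    r_rejected = beta * logp_rejected / len_rejected
    preference_loss = -logsigmoid(r_chosen - r_rejected - gamma)
    return preference_loss + nll_weight * nll_chosen
```

In a real trainer these would be batched tensors of summed token log-probabilities and sequence lengths, but the scalar form shows the structure: both terms are reference-free, which is what distinguishes this family from DPO-style objectives.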
☆51 · Updated 6 months ago
Alternatives and similar repositories for CPO_SIMPO:
Users interested in CPO_SIMPO are comparing it to the repositories listed below.
- Official implementation for 'Extending LLMs’ Context Window with 100 Samples' ☆75 · Updated last year
- Official repository for paper "Weak-to-Strong Extrapolation Expedites Alignment" ☆72 · Updated 8 months ago
- Suri: Multi-constraint instruction following for long-form text generation (EMNLP’24) ☆22 · Updated 3 months ago
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆135 · Updated 4 months ago
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆44 · Updated 2 months ago
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated last year
- Code and data used in the paper: "Training on Incorrect Synthetic Data via RL Scales LLM Math Reasoning Eight-Fold" ☆29 · Updated 8 months ago
- ☆96 · Updated 5 months ago
- Towards Systematic Measurement for Long Text Quality ☆32 · Updated 5 months ago
- ☆47 · Updated 10 months ago
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆76 · Updated 11 months ago
- An Experiment on Dynamic NTK Scaling RoPE ☆62 · Updated last year
- Official implementation of the paper "From Complex to Simple: Enhancing Multi-Constraint Complex Instruction Following Ability of Large L… ☆46 · Updated 8 months ago
- We aim to provide the best references to search, select, and synthesize high-quality and large-quantity data for post-training your LLMs. ☆50 · Updated 5 months ago
- [ACL'24 Oral] Analysing The Impact of Sequence Composition on Language Model Pre-Training ☆19 · Updated 6 months ago
- ☆33 · Updated 11 months ago
- Codebase for Instruction Following without Instruction Tuning ☆33 · Updated 5 months ago
- Code implementation of synthetic continued pretraining ☆91 · Updated last month
- Code and Data for "Long-context LLMs Struggle with Long In-context Learning" [TMLR 2025] ☆100 · Updated last week
- [ICLR'25] Data and code for our paper "Why Does the Effective Context Length of LLMs Fall Short?" ☆70 · Updated 3 months ago
- Repository for "Propagating Knowledge Updates to LMs Through Distillation" (NeurIPS 2023) ☆25 · Updated 6 months ago
- A collection of instruction data and scripts for machine translation. ☆20 · Updated last year
- [NeurIPS 2024] Train LLMs with diverse system messages reflecting individualized preferences to generalize to unseen system messages ☆43 · Updated 2 months ago
- Implementations of the online merging optimizers proposed in "Online Merging Optimizers for Boosting Rewards and Mitigating Tax in Alignment" ☆74 · Updated 8 months ago
- ☆67 · Updated last year
- LongHeads: Multi-Head Attention is Secretly a Long Context Processor ☆29 · Updated 10 months ago
- ☆68 · Updated last year