fe1ixxu / CPO_SIMPO
This repository combines the CPO and SimPO objectives for improved reference-free preference learning.
☆47 · Updated 5 months ago
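For context, the combined objective can be sketched roughly as follows. This is a hedged approximation based on the CPO and SimPO papers, not this repository's actual code; the hyperparameter names (`beta`, `gamma`, `lam`) and the exact way the two losses are summed are assumptions:

```python
import math

def cpo_simpo_loss(chosen_logps, rejected_logps, beta=2.0, gamma=0.5, lam=1.0):
    """Sketch of a combined CPO + SimPO loss for a single preference pair.

    chosen_logps / rejected_logps: per-token log-probabilities of the
    preferred and dispreferred responses under the policy. No reference
    model appears anywhere, which is what makes the objective reference-free.
    """
    # SimPO reward: length-normalized sequence log-probability, scaled by beta
    r_w = beta * sum(chosen_logps) / len(chosen_logps)
    r_l = beta * sum(rejected_logps) / len(rejected_logps)
    # Bradley-Terry-style loss with SimPO's target reward margin gamma
    simpo = -math.log(1.0 / (1.0 + math.exp(-(r_w - r_l - gamma))))
    # CPO-style behavior-cloning (NLL) regularizer on the preferred response
    nll = -sum(chosen_logps) / len(chosen_logps)
    return simpo + lam * nll
```

As expected of a preference loss, the value drops when the policy assigns higher per-token log-probability to the preferred response than to the dispreferred one.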
Alternatives and similar repositories for CPO_SIMPO:
Users interested in CPO_SIMPO are comparing it to the repositories listed below.
- Code and data used in the paper: "Training on Incorrect Synthetic Data via RL Scales LLM Math Reasoning Eight-Fold" ☆29 · Updated 7 months ago
- Suri: Multi-constraint instruction following for long-form text generation (EMNLP’24) ☆22 · Updated 2 months ago
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated 11 months ago
- Towards Systematic Measurement for Long Text Quality ☆31 · Updated 4 months ago
- Official repository for paper "Weak-to-Strong Extrapolation Expedites Alignment" ☆71 · Updated 7 months ago
- BRIGHT: A Realistic and Challenging Benchmark for Reasoning-Intensive Retrieval ☆68 · Updated last month
- [ICLR 2025] InstructRAG: Instructing Retrieval-Augmented Generation via Self-Synthesized Rationales ☆66 · Updated 2 months ago
- Official implementation for 'Extending LLMs’ Context Window with 100 Samples' ☆76 · Updated last year
- Official implementation of the paper "From Complex to Simple: Enhancing Multi-Constraint Complex Instruction Following Ability of Large L… ☆43 · Updated 7 months ago
- We aim to provide the best references to search, select, and synthesize high-quality and large-quantity data for post-training your LLMs. ☆48 · Updated 3 months ago
- Code for ACL2023 paper: Pre-Training to Learn in Context ☆108 · Updated 6 months ago
- Code implementation of synthetic continued pretraining ☆82 · Updated 3 weeks ago
- [NeurIPS 2024] Train LLMs with diverse system messages reflecting individualized preferences to generalize to unseen system messages ☆42 · Updated last month
- ☆94 · Updated 4 months ago
- ☆33 · Updated 10 months ago
- ☆48 · Updated 9 months ago
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper entitled "R-Tuning: Instructing Large Language Models to Say 'I Don't… ☆106 · Updated 6 months ago
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆131 · Updated 3 months ago
- Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators (Liu et al.; COLM 2024) ☆41 · Updated last week
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆44 · Updated last month
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆76 · Updated 10 months ago
- An Experiment on Dynamic NTK Scaling RoPE ☆62 · Updated last year
- ☆16 · Updated 11 months ago
- Code and data for paper "Context-faithful Prompting for Large Language Models". ☆39 · Updated last year
- Codebase for Instruction Following without Instruction Tuning ☆33 · Updated 4 months ago
- ☆64 · Updated 11 months ago
- MultilingualSIFT: Multilingual Supervised Instruction Fine-tuning ☆89 · Updated last year
- Code and data for "Dynosaur: A Dynamic Growth Paradigm for Instruction-Tuning Data Curation" (EMNLP 2023) ☆63 · Updated last year
- Data and code for our paper "Why Does the Effective Context Length of LLMs Fall Short?" ☆69 · Updated 2 months ago
- Code and models for EMNLP 2024 paper "WPO: Enhancing RLHF with Weighted Preference Optimization" ☆33 · Updated 4 months ago