DAMO-NLP-SG / LongPO
[ICLR 2025] LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization
☆35 · Updated 2 months ago
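The tagline above names short-to-long preference optimization. As an illustration only, not LongPO's released code, the sketch below shows a generic DPO-style preference loss that this kind of training typically builds on; the function name, the `beta` value, and the short-vs-long pairing described in the comments are assumptions.

```python
# Illustrative sketch only: a generic DPO-style preference loss of the kind
# short-to-long preference optimization could build on. Names, shapes, and
# beta are assumptions, not LongPO's implementation.
import torch
import torch.nn.functional as F

def dpo_style_loss(policy_chosen_logps, policy_rejected_logps,
                   ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard DPO objective on (chosen, rejected) response pairs.

    In a short-to-long setting, the "chosen" side might come from the model's
    own short-context outputs and the "rejected" side from its long-context
    outputs (an assumption for illustration). All inputs are per-sequence
    summed log-probabilities with shape (batch,).
    """
    policy_margin = policy_chosen_logps - policy_rejected_logps
    ref_margin = ref_chosen_logps - ref_rejected_logps
    # Loss = -log sigmoid(beta * (policy margin - reference margin))
    return -F.logsigmoid(beta * (policy_margin - ref_margin)).mean()

# Tiny usage example with random log-probabilities.
b = 4
loss = dpo_style_loss(torch.randn(b), torch.randn(b),
                      torch.randn(b), torch.randn(b))
print(loss.item())
```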
Alternatives and similar repositories for LongPO
Users interested in LongPO are comparing it to the repositories listed below
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning ☆93 · Updated last week
- General Reasoner: Advancing LLM Reasoning Across All Domains ☆82 · Updated last week
- ☆63 · Updated last week
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆47 · Updated 4 months ago
- Codebase for Instruction Following without Instruction Tuning ☆34 · Updated 7 months ago
- [NeurIPS-2024] 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies https://arxiv.org/abs/2407.13623 ☆84 · Updated 7 months ago
- Large Language Models Can Self-Improve in Long-context Reasoning ☆69 · Updated 5 months ago
- Long Context Extension and Generalization in LLMs ☆54 · Updated 7 months ago
- HelloBench: Evaluating Long Text Generation Capabilities of Large Language Models ☆43 · Updated 5 months ago
- Revisiting Mid-training in the Era of RL Scaling ☆37 · Updated 3 weeks ago
- [ICLR 2025] SuperCorrect: Advancing Small LLM Reasoning with Thought Template Distillation and Self-Correction ☆69 · Updated last month
- We introduce ScaleQuest, a scalable, novel and cost-effective data synthesis method to unleash the reasoning capability of LLMs. ☆62 · Updated 6 months ago
- A Large-Scale, High-Quality Math Dataset for Reinforcement Learning in Language Models ☆52 · Updated 2 months ago
- Official repository for paper "Weak-to-Strong Extrapolation Expedites Alignment" ☆74 · Updated 11 months ago
- ☆59 · Updated 8 months ago
- [ICML 2025] M-STAR (Multimodal Self-Evolving TrAining for Reasoning) Project. Diving into Self-Evolving Training for Multimodal Reasoning ☆58 · Updated 4 months ago
- ☆78 · Updated 4 months ago
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated last year
- [NeurIPS 2024] OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI ☆101 · Updated 2 months ago
- This repo contains evaluation code for the paper "MileBench: Benchmarking MLLMs in Long Context" ☆34 · Updated 10 months ago
- This is the implementation of LeCo ☆31 · Updated 3 months ago
- [NeurIPS 2024] A comprehensive benchmark for evaluating critique ability of LLMs ☆39 · Updated 5 months ago
- The source code of "Merging Experts into One: Improving Computational Efficiency of Mixture of Experts (EMNLP 2023)" ☆38 · Updated last year
- We aim to provide the best references to search, select, and synthesize high-quality and large-quantity data for post-training your LLMs. ☆55 · Updated 7 months ago
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆51 · Updated 3 months ago
- The code and data for the paper JiuZhang3.0 ☆44 · Updated 11 months ago
- Official implementation for "Law of the Weakest Link: Cross capabilities of Large Language Models" ☆42 · Updated 7 months ago
- Code for preprint "Metadata Conditioning Accelerates Language Model Pre-training (MeCo)" ☆38 · Updated last week
- Official Repository of "Are Your LLMs Capable of Stable Reasoning?" ☆25 · Updated last month
- Advancing Language Model Reasoning through Reinforcement Learning and Inference Scaling ☆101 · Updated 3 months ago