QwenLM / Self-Lengthen
☆87 · Updated 7 months ago
Alternatives and similar repositories for Self-Lengthen
Users interested in Self-Lengthen are comparing it to the repositories listed below.
- The official repo of SynLogic: Synthesizing Verifiable Reasoning Data at Scale for Learning Logical Reasoning and Beyond ☆138 · Updated 3 weeks ago
- [ICLR 2025] LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization ☆37 · Updated 3 months ago
- General Reasoner: Advancing LLM Reasoning Across All Domains ☆141 · Updated 2 weeks ago
- Code for "Your Mixture-of-Experts LLM Is Secretly an Embedding Model For Free" ☆74 · Updated 8 months ago
- Reformatted Alignment ☆113 · Updated 9 months ago
- ☆56 · Updated 7 months ago
- Repo for "Z1: Efficient Test-time Scaling with Code" ☆61 · Updated 2 months ago
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning ☆99 · Updated last month
- ☆68 · Updated 3 months ago
- [NeurIPS 2024] OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI ☆102 · Updated 3 months ago
- DeepResearch Bench: A Comprehensive Benchmark for Deep Research Agents ☆135 · Updated last week
- ☆116 · Updated 3 weeks ago
- Large Language Models Can Self-Improve in Long-context Reasoning ☆70 · Updated 7 months ago
- [ACL 2025] We introduce ScaleQuest, a scalable, novel, and cost-effective data synthesis method to unleash the reasoning capability of LLMs. ☆63 · Updated 7 months ago
- Official repository for ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆73 · Updated last month
- ☆80 · Updated 5 months ago
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated last year
- Codebase for Instruction Following without Instruction Tuning ☆34 · Updated 9 months ago
- [ACL 2025] An inference-time decoding strategy with adaptive foresight sampling ☆93 · Updated last month
- ☆86 · Updated last month
- [ICLR 2025] SuperCorrect: Advancing Small LLM Reasoning with Thought Template Distillation and Self-Correction ☆72 · Updated 3 months ago
- LongHeads: Multi-Head Attention is Secretly a Long Context Processor ☆29 · Updated last year
- ☆29 · Updated 2 months ago
- The official implementation of "Ada-LEval: Evaluating long-context LLMs with length-adaptable benchmarks" ☆54 · Updated last month
- [ICLR 2025] Data and code for our paper "Why Does the Effective Context Length of LLMs Fall Short?" ☆76 · Updated 7 months ago
- The official repository of the Omni-MATH benchmark ☆84 · Updated 6 months ago
- Ruler: A Model-Agnostic Method to Control Generated Length for Large Language Models ☆36 · Updated 8 months ago
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" ☆159 · Updated 2 weeks ago
- [COLM 2024] SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning ☆32 · Updated last year
- Official implementation for "Extending LLMs' Context Window with 100 Samples" ☆78 · Updated last year