bigai-nlco / TokenSwift
[ICML 2025] TokenSwift: Lossless Acceleration of Ultra Long Sequence Generation
☆118 · Updated 6 months ago
Alternatives and similar repositories for TokenSwift
Users interested in TokenSwift are comparing it to the libraries listed below.
- MiroMind-M1 is a fully open-source series of reasoning language models built on Qwen-2.5, focused on advancing mathematical reasoning. ☆244 · Updated 3 months ago
- ☆173 · Updated 7 months ago
- General Reasoner: Advancing LLM Reasoning Across All Domains [NeurIPS 2025] ☆204 · Updated this week
- CPPO: Accelerating the Training of Group Relative Policy Optimization-Based Reasoning Models (NeurIPS 2025) ☆167 · Updated 3 weeks ago
- MiroRL is an MCP-first reinforcement learning framework for deep research agents. ☆180 · Updated 3 months ago
- ☆85 · Updated 8 months ago
- Efficient Agent Training for Computer Use ☆133 · Updated 2 months ago
- ☆91 · Updated 6 months ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆137 · Updated last year
- HelloBench: Evaluating Long Text Generation Capabilities of Large Language Models ☆52 · Updated last year
- ☆300 · Updated 6 months ago
- [ICLR 2025] LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization ☆43 · Updated 9 months ago
- RL Scaling and Test-Time Scaling (ICML'25) ☆112 · Updated 10 months ago
- Ling-V2 is a MoE LLM provided and open-sourced by InclusionAI. ☆237 · Updated last month
- MiroTrain is an efficient and algorithm-first framework for post-training large agentic models. ☆99 · Updated 3 months ago
- ☆86 · Updated 3 months ago
- A unified suite for generating elite reasoning problems and training high-performance LLMs, including pioneering attention-free architectures… ☆129 · Updated last month
- The official implementation of "Ada-LEval: Evaluating long-context LLMs with length-adaptable benchmarks" ☆55 · Updated 6 months ago
- Klear-Reasoner: Advancing Reasoning Capability via Gradient-Preserving Clipping Policy Optimization ☆79 · Updated 2 months ago
- Chain of Experts (CoE) enables communication between experts within Mixture-of-Experts (MoE) models ☆223 · Updated 3 weeks ago
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" [COLM 2025] ☆179 · Updated 4 months ago
- [EMNLP 2025] LightThinker: Thinking Step-by-Step Compression ☆123 · Updated 7 months ago
- [NeurIPS 2025] The official repo of SynLogic: Synthesizing Verifiable Reasoning Data at Scale for Learning Logical Reasoning and Beyond ☆187 · Updated 4 months ago
- Revisiting Mid-training in the Era of Reinforcement Learning Scaling ☆180 · Updated 4 months ago
- [EMNLP'25 Industry] Repo for "Z1: Efficient Test-time Scaling with Code" ☆67 · Updated 7 months ago
- A highly capable 2.4B lightweight LLM pre-trained on only 1T tokens, with all training details released. ☆222 · Updated 4 months ago
- ☆98 · Updated 3 months ago
- Pre-trained, Scalable, High-performance Reward Models via Policy Discriminative Learning. ☆160 · Updated 2 months ago
- [ACL 2025] We introduce ScaleQuest, a scalable, novel and cost-effective data synthesis method to unleash the reasoning capability of LLMs… ☆68 · Updated last year
- ☆39 · Updated 4 months ago