hao-ai-lab / Dynasor
[NeurIPS 2025] Simple extension on vLLM to help you speed up reasoning models without training.
☆220 · May 31, 2025 · Updated 8 months ago
Alternatives and similar repositories for Dynasor
Users interested in Dynasor are comparing it to the libraries listed below.
- [NeurIPS 2024] Fast Best-of-N Decoding via Speculative Rejection ☆55 · Oct 29, 2024 · Updated last year
- ☆28 · May 24, 2025 · Updated 8 months ago
- Vortex: A Flexible and Efficient Sparse Attention Framework ☆46 · Jan 21, 2026 · Updated 3 weeks ago
- [NeurIPS 2024] Efficient LLM Scheduling by Learning to Rank ☆70 · Nov 4, 2024 · Updated last year
- [NeurIPS 2025] Scaling Speculative Decoding with Lookahead Reasoning ☆65 · Oct 31, 2025 · Updated 3 months ago
- [Archived] For the latest updates and community contributions, please visit: https://github.com/Ascend/TransferQueue or https://gitcode.co… ☆13 · Jan 16, 2026 · Updated last month
- [ICML 2025] TokenSwift: Lossless Acceleration of Ultra Long Sequence Generation ☆121 · May 19, 2025 · Updated 8 months ago
- ☆53 · May 19, 2025 · Updated 8 months ago
- Kaggle AIMO2 solution with token-efficient reasoning LLM recipes ☆42 · Aug 7, 2025 · Updated 6 months ago
- ☆54 · Jul 7, 2025 · Updated 7 months ago
- ☆64 · Dec 3, 2024 · Updated last year
- ☆762 · Dec 23, 2025 · Updated last month
- ☆23 · Jul 29, 2025 · Updated 6 months ago
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up long-context LLMs' inference, approximate and dynamic sparse calculation of the attention… ☆1,183 · Sep 30, 2025 · Updated 4 months ago
- [ICML 2025 Spotlight] ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference ☆283 · May 1, 2025 · Updated 9 months ago
- ☆46 · Jun 11, 2025 · Updated 8 months ago
- [ICLR 2025] Breaking the Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆142 · Dec 4, 2024 · Updated last year
- ☆21 · Dec 6, 2025 · Updated 2 months ago
- Qwen-WisdomVast is a large model trained on 1 million high-quality Chinese multi-turn SFT samples, 200,000 English multi-turn SFT samples, and … ☆18 · Apr 12, 2024 · Updated last year
- The simplest reproduction of R1-style results on small models, illustrating the most important essence of o1-like models and DeepSeek R1: "Think is all you need." Experiments support that, for strong reasoning ability, the explicit thinking process is the core of AGI/ASI. ☆45 · Feb 8, 2025 · Updated last year
- A selective knowledge distillation algorithm for efficient speculative decoders ☆36 · Nov 27, 2025 · Updated 2 months ago
- ACL 2025: SoftCoT: Soft Chain-of-Thought for Efficient Reasoning with LLMs; and preprint: SoftCoT++: Test-Time Scaling with Soft Chain-of… ☆78 · May 30, 2025 · Updated 8 months ago
- Official Repo for Open-Reasoner-Zero ☆2,085 · Jun 2, 2025 · Updated 8 months ago
- A WebUI for Side-by-Side Comparison of Media (Images/Videos) Across Multiple Folders ☆25 · Feb 21, 2025 · Updated 11 months ago
- ☆63 · Jun 12, 2025 · Updated 8 months ago
- ☆85 · Oct 17, 2025 · Updated 3 months ago
- PoC for "SpecReason: Fast and Accurate Inference-Time Compute via Speculative Reasoning" [NeurIPS '25] ☆61 · Oct 2, 2025 · Updated 4 months ago
- Revisiting Mid-training in the Era of Reinforcement Learning Scaling ☆182 · Jul 23, 2025 · Updated 6 months ago
- ☆129 · Jun 6, 2025 · Updated 8 months ago
- Democratizing Reinforcement Learning for LLMs ☆5,106 · Updated this week
- ☆82 · Apr 3, 2025 · Updated 10 months ago
- A unified suite for generating elite reasoning problems and training high-performance LLMs, including pioneering attention-free architect… ☆134 · Jan 31, 2026 · Updated 2 weeks ago
- OneEdit: A Neural-Symbolic Collaboratively Knowledge Editing System ☆19 · Oct 14, 2024 · Updated last year
- ☆20 · May 14, 2025 · Updated 9 months ago
- "FusionFactory: Fusing LLM Capabilities with Routing Data", Tao Feng, Haozhen Zhang, Zijie Lei, Pengrui Han, Mostofa Patwary, Mohammad Sh… ☆19 · Dec 30, 2025 · Updated last month
- ☆104 · Dec 6, 2024 · Updated last year
- ☆35 · Jan 12, 2026 · Updated last month
- FlashInfer: Kernel Library for LLM Serving ☆4,935 · Updated this week
- This repository contains the code for the paper: SirLLM: Streaming Infinite Retentive LLM ☆60 · May 28, 2024 · Updated last year