[NeurIPS 2025] A simple extension on vLLM that helps you speed up reasoning models without training.
☆224 · May 31, 2025 · Updated 9 months ago
Alternatives and similar repositories for Dynasor
Users interested in Dynasor are comparing it to the repositories listed below.
- [NeurIPS 2025] Scaling Speculative Decoding with Lookahead Reasoning ☆67 · Oct 31, 2025 · Updated 4 months ago
- [NeurIPS 2024] Fast Best-of-N Decoding via Speculative Rejection ☆55 · Oct 29, 2024 · Updated last year
- [NeurIPS 2024] Efficient LLM Scheduling by Learning to Rank ☆77 · Nov 4, 2024 · Updated last year
- ☆56 · Jul 7, 2025 · Updated 8 months ago
- Vortex: A Flexible and Efficient Sparse Attention Framework ☆49 · Jan 21, 2026 · Updated 2 months ago
- ☆28 · May 24, 2025 · Updated 10 months ago
- Efficient Long-context Language Model Training by Core Attention Disaggregation ☆98 · Mar 5, 2026 · Updated 3 weeks ago
- [ICML 2025] TokenSwift: Lossless Acceleration of Ultra Long Sequence Generation ☆123 · May 19, 2025 · Updated 10 months ago
- ☆65 · Dec 3, 2024 · Updated last year
- ☆87 · Oct 17, 2025 · Updated 5 months ago
- ☆56 · May 19, 2025 · Updated 10 months ago
- ☆762 · Dec 23, 2025 · Updated 3 months ago
- [ICLR 2025] Breaking the Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆145 · Dec 4, 2024 · Updated last year
- [Archived] For the latest updates and community contributions, please visit: https://github.com/Ascend/TransferQueue or https://gitcode.co… ☆13 · Jan 16, 2026 · Updated 2 months ago
- When Reasoning Meets Its Laws ☆36 · Jan 2, 2026 · Updated 2 months ago
- A WebUI for Side-by-Side Comparison of Media (Images/Videos) Across Multiple Folders ☆26 · Feb 21, 2025 · Updated last year
- A selective knowledge distillation algorithm for efficient speculative decoders ☆36 · Nov 27, 2025 · Updated 4 months ago
- Distributed ML Optimizer ☆35 · Jul 28, 2021 · Updated 4 years ago
- The simplest reproduction of R1 results on small models, illustrating the most important essence shared by O1-style models and DeepSeek R1: think is all you need. Experiments support that, for strong reasoning ability, the "think" process content is the core of AGI/ASI. ☆45 · Feb 8, 2025 · Updated last year
- ☆21 · Mar 18, 2026 · Updated last week
- ☆29 · Mar 24, 2025 · Updated last year
- [ICML 2025 Spotlight] ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference ☆291 · May 1, 2025 · Updated 10 months ago
- ACL 2025: SoftCoT: Soft Chain-of-Thought for Efficient Reasoning with LLMs; preprint: SoftCoT++: Test-Time Scaling with Soft Chain-of… ☆83 · May 30, 2025 · Updated 10 months ago
- Kaggle AIMO2 solution with token-efficient reasoning LLM recipes ☆45 · Aug 7, 2025 · Updated 7 months ago
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] Speeds up long-context LLM inference with approximate, dynamic sparse calculation of the attention… ☆1,203 · Mar 9, 2026 · Updated 3 weeks ago
- ☆63 · Jun 12, 2025 · Updated 9 months ago
- PoC for "SpecReason: Fast and Accurate Inference-Time Compute via Speculative Reasoning" [NeurIPS '25] ☆67 · Oct 2, 2025 · Updated 5 months ago
- [ACL 2025 Main] Repository for the paper: 500xCompressor: Generalized Prompt Compression for Large Language Models ☆57 · Mar 9, 2026 · Updated 3 weeks ago
- Official Repo for Open-Reasoner-Zero ☆2,088 · Jun 2, 2025 · Updated 9 months ago
- Awesome-Long2short-on-LRMs is a collection of state-of-the-art, novel, exciting long2short methods for large reasoning models. It contains… ☆259 · Mar 7, 2026 · Updated 3 weeks ago
- Multi-Turn RL Training System with AgentTrainer for Language Model Game Reinforcement Learning ☆60 · Dec 18, 2025 · Updated 3 months ago
- ☆20 · Dec 24, 2024 · Updated last year
- ☆20 · May 14, 2025 · Updated 10 months ago
- Some microbenchmarks and design docs before commencement ☆11 · Feb 1, 2021 · Updated 5 years ago
- Democratizing Reinforcement Learning for LLMs ☆5,297 · Updated this week
- ☆24 · Feb 18, 2025 · Updated last year
- [ICLR 2025] DeFT: Decoding with Flash Tree-attention for Efficient Tree-structured LLM Inference ☆50 · Jun 17, 2025 · Updated 9 months ago
- A throughput-oriented high-performance serving framework for LLMs ☆950 · Oct 29, 2025 · Updated 5 months ago
- ☆33 · Oct 13, 2025 · Updated 5 months ago