Repository for the COLM 2025 paper SpecDec++: Boosting Speculative Decoding via Adaptive Candidate Lengths
☆18 · Jul 10, 2025 · Updated 9 months ago
Alternatives and similar repositories for SpecDec_pp
Users interested in SpecDec_pp are comparing it to the repositories listed below.
- ☆50 · Oct 24, 2023 · Updated 2 years ago
- ☆28 · May 24, 2025 · Updated 10 months ago
- ☆14 · May 9, 2024 · Updated last year
- ☆19 · May 4, 2023 · Updated 2 years ago
- Code for the EMNLP 2024 paper "A simple and effective L2 norm based method for KV Cache compression." ☆18 · Dec 13, 2024 · Updated last year
- NUMA-Aware Reader-Writer Locks ☆19 · Jun 12, 2014 · Updated 11 years ago
- ☆12 · May 13, 2025 · Updated 11 months ago
- Princeton University Ph.D. Dissertation Template ☆18 · Apr 9, 2017 · Updated 9 years ago
- A universal workflow system for exactly-once DAGs ☆23 · Jun 1, 2023 · Updated 2 years ago
- ☆10 · Feb 1, 2022 · Updated 4 years ago
- MACER: MAximizing CErtified Radius (ICLR 2020) ☆31 · Jan 5, 2020 · Updated 6 years ago
- An e-commerce shopping-guide chatbot with natural language understanding (NLU), text error correction, and word-sense disambiguation ☆12 · May 5, 2020 · Updated 5 years ago
- A fast and scalable distributed lock service using programmable switches. ☆20 · Jul 30, 2024 · Updated last year
- Source code for Jellyfish, a soft real-time inference serving system ☆15 · Dec 20, 2022 · Updated 3 years ago
- Efficient GPU communication over multiple NICs. ☆27 · Nov 20, 2025 · Updated 4 months ago
- Main repository of the project ☆26 · Sep 11, 2022 · Updated 3 years ago
- Official repo for "SplitQuant / LLM-PQ: Resource-Efficient LLM Offline Serving on Heterogeneous GPUs via Phase-Aware Model Partition and …" ☆37 · Aug 29, 2025 · Updated 7 months ago
- ☆31 · May 28, 2024 · Updated last year
- ☆12 · Oct 16, 2022 · Updated 3 years ago
- A deep learning intermediate representation for multi-platform compiler optimization ☆10 · Oct 28, 2024 · Updated last year
- Sirius, an efficient correction mechanism that significantly boosts Contextual Sparsity models on reasoning tasks while maintaining its… ☆21 · Sep 10, 2024 · Updated last year
- ☆41 · Sep 13, 2025 · Updated 7 months ago
- ☆10 · Oct 31, 2022 · Updated 3 years ago
- PyTorch library for Active Fine-Tuning ☆98 · Sep 27, 2025 · Updated 6 months ago
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆145 · Dec 4, 2024 · Updated last year
- [ICLR 2025] On Evaluating the Durability of Safeguards for Open-Weight LLMs ☆13 · Jun 20, 2025 · Updated 9 months ago
- An efficient implementation of the NSA (Native Sparse Attention) kernel ☆133 · Jun 24, 2025 · Updated 9 months ago
- PoC for "SpecReason: Fast and Accurate Inference-Time Compute via Speculative Reasoning" [NeurIPS '25] ☆67 · Oct 2, 2025 · Updated 6 months ago
- (Elastic) cuckoo hashing ☆16 · Jun 20, 2020 · Updated 5 years ago
- TokenSim, a tool for simulating the behavior of large language models (LLMs) in a distributed environment ☆22 · Sep 20, 2025 · Updated 6 months ago
- Fast and memory-efficient exact attention ☆20 · Updated this week
- Code used to produce experimental results for the paper "Deep Structured Prediction with Nonlinear Output Activations" ☆11 · May 6, 2019 · Updated 6 years ago
- ☆10 · Dec 8, 2021 · Updated 4 years ago
- ☆28 · Apr 17, 2025 · Updated 11 months ago
- [ICML 2024] Serving LLMs on heterogeneous decentralized clusters. ☆35 · May 6, 2024 · Updated last year
- ☆14 · Dec 21, 2025 · Updated 3 months ago
- Source code for the OSDI 2023 paper "Cilantro: Performance-Aware Resource Allocation for General Objectives via Online Feedback" ☆40 · Jul 6, 2023 · Updated 2 years ago
- Medusa: Accelerating Serverless LLM Inference with Materialization [ASPLOS '25] ☆44 · May 13, 2025 · Updated 11 months ago
- ☆33 · Oct 13, 2025 · Updated 6 months ago