Train speculative decoding models effortlessly and port them smoothly to SGLang serving.
☆777 · Apr 2, 2026 · Updated last week
Alternatives and similar repositories for SpecForge
Users interested in SpecForge are comparing it to the libraries listed below.
- Official Implementation of EAGLE-1 (ICML'24), EAGLE-2 (EMNLP'24), and EAGLE-3 (NeurIPS'25). ☆2,273 · Feb 20, 2026 · Updated last month
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… ☆287 · Apr 2, 2026 · Updated last week
- FlashInfer: Kernel Library for LLM Serving ☆5,372 · Updated this week
- Materials for learning SGLang ☆799 · Jan 5, 2026 · Updated 3 months ago
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up long-context LLMs' inference, approximate and dynamic sparse calculation of the attention… ☆1,203 · Apr 8, 2026 · Updated last week
- Spec-Bench: A Comprehensive Benchmark and Unified Evaluation Platform for Speculative Decoding (ACL 2024 Findings) ☆381 · Apr 22, 2025 · Updated 11 months ago
- NVIDIA Inference Xfer Library (NIXL) ☆970 · Updated this week
- SGLang is a high-performance serving framework for large language models and multimodal models. ☆25,643 · Updated this week
- Perplexity GPU Kernels ☆565 · Nov 7, 2025 · Updated 5 months ago
- My learning notes for ML SYS. ☆5,970 · Apr 8, 2026 · Updated last week
- ☆65 · Apr 26, 2025 · Updated 11 months ago
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI. ☆5,071 · Updated this week
- Distributed Compiler based on Triton for Parallel Systems ☆1,403 · Updated this week
- slime is an LLM post-training framework for RL Scaling. ☆5,264 · Updated this week
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM ☆2,996 · Updated this week
- Domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels ☆5,478 · Updated this week
- A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM ☆335 · Updated this week
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalabili… ☆3,997 · Updated this week
- A unified library of SOTA model optimization techniques like quantization, pruning, distillation, speculative decoding, etc. It compresse… ☆2,436 · Updated this week
- DFlash: Block Diffusion for Flash Speculative Decoding ☆1,016 · Apr 8, 2026 · Updated last week
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… ☆826 · Mar 6, 2025 · Updated last year
- A Datacenter Scale Distributed Inference Serving Framework ☆6,527 · Updated this week
- Open Model Engine (OME): Kubernetes operator for LLM serving, GPU scheduling, and model lifecycle management. Works with SGLang, vLLM, T… ☆413 · Apr 8, 2026 · Updated last week
- 📰 Must-read papers and blogs on Speculative Decoding ⚡️ ☆1,185 · Mar 31, 2026 · Updated 2 weeks ago
- [COLM 2024] TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding ☆279 · Aug 31, 2024 · Updated last year
- LLM KV cache compression made easy ☆1,021 · Updated this week
- ☆209 · May 5, 2025 · Updated 11 months ago
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,722 · Jun 25, 2024 · Updated last year
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. ☆1,284 · Aug 28, 2025 · Updated 7 months ago
- ☆19 · Dec 24, 2024 · Updated last year
- Bridge Megatron-Core to Hugging Face/Reinforcement Learning ☆206 · Updated this week
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆1,080 · Apr 8, 2026 · Updated last week
- Checkpoint-engine is a simple middleware to update model weights in LLM inference engines. ☆940 · Feb 28, 2026 · Updated last month
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆535 · Feb 10, 2025 · Updated last year
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer ☆174 · Feb 11, 2026 · Updated 2 months ago
- A Quirky Assortment of CuTe Kernels ☆924 · Updated this week
- [ICLR 2025] PEARL: Parallel Speculative Decoding with Adaptive Draft Length ☆156 · Dec 23, 2025 · Updated 3 months ago
- VeOmni: Scaling Any Modality Model Training with Model-Centric Distributed Recipe Zoo ☆1,814 · Updated this week
- Best practices for Megatron on veRL, with a tuning guide ☆132 · Sep 26, 2025 · Updated 6 months ago
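Many of the entries above (EAGLE, Medusa, Spec-Bench, TriForce, PEARL, DFlash) build on the same draft-then-verify idea. As background, here is a minimal greedy sketch of that loop with toy stand-in models; `target_step`, `draft_step`, and the `k`/`max_new` parameters are illustrative and do not correspond to any listed repo's API:

```python
def speculative_decode(target_step, draft_step, prompt, k=4, max_new=16):
    """Greedy draft-then-verify loop (toy sketch, not any repo's API).

    target_step / draft_step: fn(tokens) -> that model's greedy next token.
    """
    tokens = list(prompt)
    while len(tokens) - len(prompt) < max_new:
        # 1) The cheap draft model proposes k tokens autoregressively.
        draft = []
        for _ in range(k):
            draft.append(draft_step(tokens + draft))
        # 2) The target model checks each proposal; real engines do this
        #    in a single batched forward pass rather than a Python loop.
        n = 0
        while n < k and target_step(tokens + draft[:n]) == draft[n]:
            n += 1
        tokens += draft[:n]
        # 3) Append the target's own token: a correction after a mismatch,
        #    or a free "bonus" token when all k drafts were accepted.
        tokens.append(target_step(tokens))
    return tokens[:len(prompt) + max_new]
```

Because every emitted token is the target model's own greedy choice given the accepted prefix, the output is identical to plain target-only decoding; the draft model only reduces how many target calls are needed per emitted token.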