☆59 · May 19, 2025 · Updated 11 months ago
Alternatives and similar repositories for speculative_prefill
Users interested in speculative_prefill are comparing it to the libraries listed below.
- ☆27 · Nov 25, 2025 · Updated 5 months ago
- [ACL Findings 2026] Official implementation of "FastKV: Decoupling of Context Reduction and KV Cache Compression for Prefill-Decoding Acc…" ☆31 · Apr 14, 2026 · Updated 3 weeks ago
- An LLM inference engine, written in C++ ☆19 · Mar 30, 2026 · Updated last month
- ☆63 · Jun 12, 2025 · Updated 10 months ago
- The official implementation of the paper "SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction" ☆50 · Oct 18, 2024 · Updated last year
- [NeurIPS 2024] The official implementation of "Kangaroo: Lossless Self-Speculative Decoding for Accelerating LLMs via Double Early Exitin…" ☆68 · Jun 26, 2024 · Updated last year
- Paper-reading notes for the Berkeley OS prelim exam ☆14 · Aug 28, 2024 · Updated last year
- ☆23 · Mar 7, 2025 · Updated last year
- ☆22 · Apr 17, 2025 · Updated last year
- [NeurIPS 2024] An Efficient Recipe for Long Context Extension via Middle-Focused Positional Encoding ☆22 · Oct 10, 2024 · Updated last year
- Suri: Multi-constraint instruction following for long-form text generation (EMNLP’24) ☆27 · Oct 3, 2025 · Updated 7 months ago
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆112 · Oct 11, 2025 · Updated 6 months ago
- [ICML 2025] LaCache: Ladder-Shaped KV Caching for Efficient Long-Context Modeling of Large Language Models ☆18 · Nov 4, 2025 · Updated 6 months ago
- AdaSkip: Adaptive Sublayer Skipping for Accelerating Long-Context LLM Inference ☆20 · Jan 24, 2025 · Updated last year
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆146 · Dec 4, 2024 · Updated last year
- ☆13 · Mar 11, 2018 · Updated 8 years ago
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆183 · Jul 12, 2024 · Updated last year
- Continuous Pipelined Speculative Decoding ☆20 · Jan 4, 2026 · Updated 4 months ago
- [ICLR 2025] The official PyTorch implementation of "Dynamic-LLaVA: Efficient Multimodal Large Language Models via Dynamic Vision-language Cont…" ☆72 · Sep 18, 2025 · Updated 7 months ago
- ☆135 · Jun 6, 2025 · Updated 11 months ago
- ☆64 · Mar 30, 2026 · Updated last month
- ☆29 · May 24, 2025 · Updated 11 months ago
- [NeurIPS 2025] A simple extension on vLLM to help you speed up reasoning models without training ☆228 · May 31, 2025 · Updated 11 months ago
- 16-fold memory access reduction with nearly no loss ☆108 · Mar 26, 2025 · Updated last year
- [NeurIPS 2024] Fast Best-of-N Decoding via Speculative Rejection ☆54 · Oct 29, 2024 · Updated last year
- Cascade Speculative Drafting ☆33 · Apr 2, 2024 · Updated 2 years ago
- [ACL 2025 main] FR-Spec: Frequency-Ranked Speculative Sampling ☆54 · Jul 15, 2025 · Updated 9 months ago
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference ☆382 · Jul 10, 2025 · Updated 9 months ago
- ☆63 · May 16, 2025 · Updated 11 months ago
- The official implementation of Ada-KV [NeurIPS 2025] ☆132 · Nov 26, 2025 · Updated 5 months ago
- RAG-RewardBench: Benchmarking Reward Models in Retrieval Augmented Generation for Preference Alignment ☆17 · Dec 19, 2024 · Updated last year
- ☆24 · May 21, 2025 · Updated 11 months ago
- Code for my ICLR 2024 Tiny Papers paper "Prune and Tune: Improving Efficient Pruning Techniques for Massive Language Models" ☆16 · May 26, 2023 · Updated 2 years ago
- ☆62 · Oct 29, 2024 · Updated last year
- ☆313 · Jul 10, 2025 · Updated 9 months ago
- [ICLR 2026] The first W4A4KV4 quantized + 50% sparse LLMs! ☆26 · Jan 26, 2026 · Updated 3 months ago
- Kinetics: Rethinking Test-Time Scaling Laws ☆87 · Jul 11, 2025 · Updated 9 months ago
- An efficient and scalable attention module designed to reduce memory usage and improve inference speed in large language models. Designe… ☆21 · Jun 25, 2025 · Updated 10 months ago
- Code for the paper "Long cOntext aliGnment via efficient preference Optimization" ☆24 · Oct 10, 2025 · Updated 6 months ago