Jingyu6 / speculative_prefill
☆53 · May 19, 2025 · Updated 8 months ago
Alternatives and similar repositories for speculative_prefill
Users interested in speculative_prefill are comparing it to the repositories listed below.
- ☆27 · Nov 25, 2025 · Updated 2 months ago
- Official Implementation of FastKV: Decoupling of Context Reduction and KV Cache Compression for Prefill-Decoding Acceleration · ☆29 · Nov 22, 2025 · Updated 2 months ago
- AdaSkip: Adaptive Sublayer Skipping for Accelerating Long-Context LLM Inference · ☆20 · Jan 24, 2025 · Updated last year
- [NeurIPS 2024] An Efficient Recipe for Long Context Extension via Middle-Focused Positional Encoding · ☆21 · Oct 10, 2024 · Updated last year
- ☆22 · Mar 7, 2025 · Updated 11 months ago
- Suri: Multi-constraint instruction following for long-form text generation (EMNLP’24) · ☆27 · Oct 3, 2025 · Updated 4 months ago
- Continuous Pipelined Speculative Decoding · ☆16 · Jan 4, 2026 · Updated last month
- [NeurIPS 2024] The official implementation of "Kangaroo: Lossless Self-Speculative Decoding for Accelerating LLMs via Double Early Exitin… · ☆65 · Jun 26, 2024 · Updated last year
- ☆21 · Apr 17, 2025 · Updated 9 months ago
- [ICML 2025] LaCache: Ladder-Shaped KV Caching for Efficient Long-Context Modeling of Large Language Models · ☆17 · Nov 4, 2025 · Updated 3 months ago
- An LLM inference engine, written in C++ · ☆18 · Feb 5, 2026 · Updated last week
- ☆14 · Jan 24, 2025 · Updated last year
- ☆60 · Jan 12, 2026 · Updated last month
- ☆28 · May 24, 2025 · Updated 8 months ago
- CoV: Chain-of-View Prompting for Spatial Reasoning · ☆50 · Jan 23, 2026 · Updated 3 weeks ago
- The official implementation of the paper "SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction" · ☆52 · Oct 18, 2024 · Updated last year
- [NeurIPS 2024] Fast Best-of-N Decoding via Speculative Rejection · ☆55 · Oct 29, 2024 · Updated last year
- Source code of the paper "KVSharer: Efficient Inference via Layer-Wise Dissimilar KV Cache Sharing" · ☆31 · Oct 24, 2024 · Updated last year
- ☆15 · Apr 11, 2024 · Updated last year
- RAG-RewardBench: Benchmarking Reward Models in Retrieval Augmented Generation for Preference Alignment · ☆16 · Dec 19, 2024 · Updated last year
- Official implementation of our paper "THOR: Tool-Integrated Hierarchical Optimization via RL for Mathematical Reasoning" · ☆29 · Sep 19, 2025 · Updated 4 months ago
- ☆62 · Oct 29, 2024 · Updated last year
- [ICLR 2025] The official PyTorch implementation of "Dynamic-LLaVA: Efficient Multimodal Large Language Models via Dynamic Vision-language Cont… · ☆70 · Sep 18, 2025 · Updated 4 months ago
- Paper-reading notes for the Berkeley OS prelim exam · ☆14 · Aug 28, 2024 · Updated last year
- ☆23 · May 21, 2025 · Updated 8 months ago
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" · ☆109 · Oct 11, 2025 · Updated 4 months ago
- (ACL 2025 oral) SCOPE: Optimizing KV Cache Compression in Long-context Generation · ☆34 · May 28, 2025 · Updated 8 months ago
- Emergent Hierarchical Reasoning in LLMs/VLMs through Reinforcement Learning · ☆60 · Oct 24, 2025 · Updated 3 months ago
- A comprehensive and efficient long-context model evaluation framework · ☆30 · Updated this week
- The official implementation of Ada-KV [NeurIPS 2025] · ☆126 · Nov 26, 2025 · Updated 2 months ago
- 16-fold memory access reduction with nearly no loss · ☆110 · Mar 26, 2025 · Updated 10 months ago
- [ACL 2025 main] FR-Spec: Frequency-Ranked Speculative Sampling · ☆49 · Jul 15, 2025 · Updated 6 months ago
- The official implementation of "LightTransfer: Your Long-Context LLM is Secretly a Hybrid Model with Effortless Adaptation" · ☆22 · Apr 22, 2025 · Updated 9 months ago
- Code for my ICLR 2024 Tiny Papers paper "Prune and Tune: Improving Efficient Pruning Techniques for Massive Language Models" · ☆16 · May 26, 2023 · Updated 2 years ago
- [ICLR 2026] The first W4A4KV4-quantized + 50%-sparse LLMs! · ☆22 · Jan 26, 2026 · Updated 2 weeks ago
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference · ☆372 · Jul 10, 2025 · Updated 7 months ago
- [ACL 2025] Squeezed Attention: Accelerating Long Prompt LLM Inference · ☆56 · Nov 20, 2024 · Updated last year
- ☆303 · Jul 10, 2025 · Updated 7 months ago
- ☆16 · Jul 23, 2024 · Updated last year