feifeibear / LLMSpeculativeSampling
Fast inference from large language models via speculative decoding
★880 · Updated last year
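Since the repository's one-line description only names the technique, here is a minimal, self-contained sketch of the general draft-then-verify speculative sampling loop: a small draft model proposes several tokens, and the large target model accepts or rejects them with the standard acceptance rule. It uses toy next-token distributions in place of real models; names such as `draft_dist`, `target_dist`, `speculative_step`, and `GAMMA`, and the toy setup itself, are illustrative assumptions, not this repository's actual API.

```python
# Minimal sketch of draft-then-verify speculative sampling with toy distributions.
import numpy as np

VOCAB = 8          # toy vocabulary size
GAMMA = 4          # number of draft tokens proposed per verification step
rng = np.random.default_rng(0)

def toy_dist(seq, temperature):
    """Deterministic toy next-token distribution over VOCAB tokens."""
    seed = (sum(seq) + len(seq)) % 1000
    logits = np.sin(np.arange(VOCAB) * (seed + 1) / temperature)
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()

def draft_dist(seq):   # stands in for the small, cheap draft model
    return toy_dist(seq, temperature=2.0)

def target_dist(seq):  # stands in for the large, accurate target model
    return toy_dist(seq, temperature=1.0)

def speculative_step(seq):
    """Propose GAMMA tokens with the draft model, then verify them against
    the target model using the accept/reject rule from speculative sampling."""
    drafts, q_probs = [], []
    ctx = list(seq)
    for _ in range(GAMMA):
        q = draft_dist(ctx)
        tok = rng.choice(VOCAB, p=q)
        drafts.append(tok)
        q_probs.append(q)
        ctx.append(tok)
    out = list(seq)
    for tok, q in zip(drafts, q_probs):
        p = target_dist(out)
        if rng.random() < min(1.0, p[tok] / q[tok]):
            out.append(tok)                       # draft token accepted
        else:
            residual = np.maximum(p - q, 0)       # reject: resample from residual
            residual /= residual.sum()
            out.append(rng.choice(VOCAB, p=residual))
            return out
    # all drafts accepted: sample one bonus token from the target model
    out.append(rng.choice(VOCAB, p=target_dist(out)))
    return out

seq = [1, 2, 3]
for _ in range(5):
    seq = speculative_step(seq)
print(seq)
```

The rejection branch resamples from the normalized residual max(p − q, 0); this correction is what keeps the overall output distribution identical to sampling from the target model alone, while letting most tokens be produced by the cheap draft model.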
Alternatives and similar repositories for LLMSpeculativeSampling
Users interested in LLMSpeculativeSampling are comparing it to the libraries listed below
- 📰 Must-read papers and blogs on Speculative Decoding ⚡️ · ★1,082 · Updated 3 weeks ago
- Spec-Bench: A Comprehensive Benchmark and Unified Evaluation Platform for Speculative Decoding (ACL 2024 Findings) · ★353 · Updated 8 months ago
- [NeurIPS'23] H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models. · ★500 · Updated last year
- LongBench v2 and LongBench (ACL 25'&24') · ★1,067 · Updated last year
- Analyze the inference of Large Language Models (LLMs). Analyze aspects like computation, storage, transmission, and hardware roofline mod… · ★605 · Updated last year
- 📰 Must-read papers on KV Cache Compression (constantly updating). · ★635 · Updated 3 months ago
- [NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Support Llama-3/3.1, Llama-2, LLaMA, BLOOM, Vicuna, Baich… · ★1,097 · Updated last year
- Ring attention implementation with flash attention · ★963 · Updated 4 months ago
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) · ★1,003 · Updated last year
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long Context Transformers Model Training and Inference · ★626 · Updated this week
- ★352 · Updated last year
- [TMLR 2024] Efficient Large Language Models: A Survey · ★1,247 · Updated 6 months ago
- Best practice for training LLaMA models in Megatron-LM · ★664 · Updated 2 years ago
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding · ★1,314 · Updated 10 months ago
- [EMNLP 2024 & AAAI 2026] A powerful toolkit for compressing large models including LLMs, VLMs, and video generative models. · ★659 · Updated last month
- Awesome list for LLM pruning. · ★280 · Updated 3 months ago
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving. · ★626 · Updated this week
- The official repo of Pai-Megatron-Patch for LLM & VLM large scale training developed by Alibaba Cloud. · ★1,507 · Updated last month
- Awesome-LLM-KV-Cache: A curated list of Awesome LLM KV Cache Papers with Codes. · ★405 · Updated 10 months ago
- Code associated with the paper **Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding** · ★216 · Updated 11 months ago
- Disaggregated serving system for Large Language Models (LLMs). · ★766 · Updated 9 months ago
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning · ★635 · Updated last year
- Official Implementation of "Learning Harmonized Representations for Speculative Sampling" (HASS) · ★52 · Updated 10 months ago
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up Long-context LLMs' inference, approximate and dynamic sparse calculate the attention… · ★1,171 · Updated 3 months ago
- Code repo for the paper "LLM-QAT Data-Free Quantization Aware Training for Large Language Models" · ★322 · Updated 10 months ago
- ★299 · Updated 6 months ago
- Explorations into some recent techniques surrounding speculative decoding · ★296 · Updated last year
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… · ★802 · Updated 10 months ago
- Awesome LLM compression research papers and tools. · ★1,757 · Updated 2 months ago
- An Efficient "Factory" to Build Multiple LoRA Adapters · ★366 · Updated 11 months ago