sihyeong / Awesome-LLM-Inference-Engine
☆138 · Updated 4 months ago
Alternatives and similar repositories for Awesome-LLM-Inference-Engine
Users interested in Awesome-LLM-Inference-Engine are comparing it to the repositories listed below.
- Awesome-LLM-KV-Cache: A curated list of 📙Awesome LLM KV Cache Papers with Codes. ☆376 · Updated 7 months ago
- A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of … ☆278 · Updated 4 months ago
- ☆43 · Updated last year
- Awesome list for LLM quantization ☆326 · Updated last week
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving. ☆439 · Updated this week
- Analyze the inference of Large Language Models (LLMs). Analyze aspects like computation, storage, transmission, and hardware roofline mod… ☆567 · Updated last year
- This repository serves as a comprehensive survey of LLM development, featuring numerous research papers along with their corresponding co… ☆221 · Updated 2 months ago
- [DAC'25] Official implementation of "HybriMoE: Hybrid CPU-GPU Scheduling and Cache Management for Efficient MoE Inference" ☆75 · Updated 4 months ago
- CPM.cu is a lightweight, high-performance CUDA implementation for LLMs, optimized for end-device inference and featuring cutting-edge tec… ☆199 · Updated 2 weeks ago
- [ICLR 2025] PEARL: Parallel Speculative Decoding with Adaptive Draft Length ☆120 · Updated 6 months ago
- Since the emergence of ChatGPT in 2022, the acceleration of Large Language Models has become increasingly important. Here is a list of pap… ☆278 · Updated 7 months ago
- Repo for SpecEE: Accelerating Large Language Model Inference with Speculative Early Exiting (ISCA25) ☆64 · Updated 5 months ago
- Curated collection of papers on MoE model inference ☆285 · Updated last month
- 📰 Must-read papers on KV Cache Compression (constantly updating 🤗). ☆566 · Updated 3 weeks ago
- ☆148 · Updated 7 months ago
- Compare different hardware platforms via the Roofline Model for LLM inference tasks. ☆115 · Updated last year
- Summary of some awesome work for optimizing LLM inference ☆120 · Updated 4 months ago
- A low-latency & high-throughput serving engine for LLMs ☆431 · Updated last week
- ☆96 · Updated 6 months ago
- LLM Inference with a Deep Learning Accelerator. ☆52 · Updated 9 months ago
- ☆141 · Updated 3 months ago
- Materials for learning SGLang ☆615 · Updated 3 weeks ago
- ☆136 · Updated last year
- An annotated nano_vllm repository, with MiniCPM4 adaptation and support for registering new models ☆81 · Updated 2 months ago
- [NeurIPS 2024] Efficient LLM Scheduling by Learning to Rank ☆60 · Updated 11 months ago
- LLM theoretical performance analysis tools, supporting params, FLOPs, memory, and latency analysis. ☆108 · Updated 3 months ago
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference ☆338 · Updated 3 months ago
- ☆74 · Updated last week
- Virtualized Elastic KV Cache for Dynamic GPU Sharing and Beyond ☆104 · Updated this week
- Stateful LLM Serving ☆87 · Updated 7 months ago