A summary of awesome work on optimizing LLM inference
☆236 · Feb 14, 2026 · Updated 2 months ago
Alternatives and similar repositories for LLM-inference-optimization-paper
Users interested in LLM-inference-optimization-paper are comparing it to the repositories listed below.
- Since the emergence of ChatGPT in 2022, accelerating Large Language Models has become increasingly important. Here is a list of pap…☆282 · Mar 6, 2025 · Updated last year
- 📚A curated list of Awesome LLM/VLM Inference Papers with Codes: Flash-Attention, Paged-Attention, WINT8/4, Parallelism, etc.🎉☆5,144 · Apr 9, 2026 · Updated last week
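  The Paged-Attention work in the entry above rests on a simple memory-management idea: instead of reserving one contiguous KV-cache region per sequence, tokens are stored in fixed-size physical blocks located through a per-sequence block table, much like virtual-memory pages. A minimal sketch of that bookkeeping follows; all names are illustrative, and none of this is vLLM's actual API.

  ```python
  # Minimal sketch of a paged KV cache: logical token slots map to
  # fixed-size physical blocks through a per-sequence block table.
  # All names are illustrative; this is not vLLM's actual API.
  BLOCK_SIZE = 16

  class PagedKVCache:
      def __init__(self, num_blocks: int):
          self.free_blocks = list(range(num_blocks))  # physical block pool
          self.block_tables = {}                      # seq_id -> [block ids]

      def append_token(self, seq_id: int, pos: int) -> tuple[int, int]:
          """Return (physical_block, offset) where token `pos` of `seq_id` lives."""
          table = self.block_tables.setdefault(seq_id, [])
          if pos // BLOCK_SIZE >= len(table):         # current block is full
              table.append(self.free_blocks.pop())    # allocate on demand
          return table[pos // BLOCK_SIZE], pos % BLOCK_SIZE

      def free(self, seq_id: int):
          """Release all blocks of a finished sequence back to the pool."""
          self.free_blocks.extend(self.block_tables.pop(seq_id, []))

  cache = PagedKVCache(num_blocks=64)
  for pos in range(40):                               # a 40-token sequence
      block, offset = cache.append_token(seq_id=0, pos=pos)
  cache.free(seq_id=0)
  ```

  Because blocks are allocated on demand and returned to a shared pool, per-sequence over-reservation and fragmentation largely disappear, which is what makes large-batch serving feasible.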
- This repository serves as a comprehensive survey of LLM development, featuring numerous research papers along with their corresponding co…☆310 · Dec 5, 2025 · Updated 4 months ago
- A throughput-oriented high-performance serving framework for LLMs☆952 · Mar 29, 2026 · Updated 2 weeks ago
- ☆58 · May 4, 2024 · Updated last year
- A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of …☆323 · Jun 10, 2025 · Updated 10 months ago
- ☆31 · May 28, 2024 · Updated last year
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se…☆826 · Mar 6, 2025 · Updated last year
- Disaggregated serving system for Large Language Models (LLMs).☆801 · Apr 6, 2025 · Updated last year
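  Disaggregated serving splits the two phases of inference, compute-bound prefill and latency-bound decode, onto separate worker pools and ships the KV cache between them. A toy sketch of that handoff, with an in-process queue standing in for the cross-GPU transfer channel; all names are illustrative, not any real system's interface.

  ```python
  from queue import Queue

  # Toy disaggregation: prefill and decode run in separate worker pools;
  # prefill hands its KV cache to a decode worker over a transfer channel.
  kv_transfer = Queue()  # stand-in for a cross-GPU/cross-node channel

  def prefill_worker(request_id: str, prompt_tokens: list[int]) -> None:
      kv_cache = [t * 2 for t in prompt_tokens]   # stand-in for attention KV
      kv_transfer.put((request_id, kv_cache))     # ship cache to decode pool

  def decode_worker(num_tokens: int) -> tuple[str, list[int]]:
      request_id, kv_cache = kv_transfer.get()    # receive migrated cache
      out = []
      for _ in range(num_tokens):                 # token-by-token decode
          out.append(len(kv_cache))               # stand-in for a model step
          kv_cache.append(out[-1])
      return request_id, out

  prefill_worker("req-1", [1, 2, 3])
  print(decode_worker(num_tokens=4))              # ('req-1', [3, 4, 5, 6])
  ```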
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models☆23 · Mar 15, 2024 · Updated 2 years ago
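  SmoothQuant's core trick is an offline, mathematically equivalent rescaling: activation outliers are migrated into the weights with a per-input-channel factor s_j = max|X_j|^α / max|W_j|^(1−α), after which both operands quantize well. A minimal PyTorch sketch of that smoothing step; shapes and names are illustrative, not the repo's API.

  ```python
  import torch

  # Sketch of SmoothQuant's offline smoothing: migrate activation outliers
  # into the weights with a per-input-channel scale
  #   s_j = max|X_j|**alpha / max|W_j|**(1 - alpha),
  # so that (X / s) @ (diag(s) @ W) == X @ W, but both factors quantize well.
  def smooth(x: torch.Tensor, w: torch.Tensor, alpha: float = 0.5):
      # x: [tokens, in_features], w: [in_features, out_features]
      act_max = x.abs().amax(dim=0)       # per-input-channel activation max
      w_max = w.abs().amax(dim=1)         # per-input-channel weight max
      s = (act_max.pow(alpha) / w_max.pow(1 - alpha)).clamp(min=1e-5)
      return x / s, w * s.unsqueeze(1)    # smoothed activations / weights

  x = torch.randn(8, 4) * torch.tensor([1.0, 50.0, 1.0, 1.0])  # channel 1 is an outlier
  w = torch.randn(4, 3)
  x_s, w_s = smooth(x, w)
  assert torch.allclose(x @ w, x_s @ w_s, atol=1e-4)  # equivalence preserved
  ```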
- Curated collection of papers in machine learning systems☆539 · Feb 7, 2026 · Updated 2 months ago
- Repository for the COLM 2025 paper SpecDec++: Boosting Speculative Decoding via Adaptive Candidate Lengths☆18 · Jul 10, 2025 · Updated 9 months ago
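  Speculative decoding, which SpecDec++ builds on, lets a cheap draft model propose several tokens that the target model then verifies in one pass, keeping the longest agreeing prefix plus one corrected token. The sketch below is a toy greedy variant with a fixed candidate length k; SpecDec++'s actual contribution is choosing that length adaptively per step, which is not reproduced here. Both "models" are stand-in callables mapping a prefix to the next token.

  ```python
  # Toy greedy speculative decoding with a fixed candidate length k.
  def speculative_decode(target, draft, prefix, num_tokens, k=4):
      out = list(prefix)
      while len(out) - len(prefix) < num_tokens:
          proposal = list(out)
          for _ in range(k):                     # draft k candidate tokens
              proposal.append(draft(proposal))
          for i in range(k):                     # target verifies greedily
              expected = target(proposal[: len(out) + i])
              if expected != proposal[len(out) + i]:
                  # keep the agreeing prefix plus the target's correction
                  out.extend(proposal[len(out): len(out) + i] + [expected])
                  break
          else:
              out.extend(proposal[len(out):])    # all k tokens accepted
      return out[: len(prefix) + num_tokens]

  # Stand-in models: target counts up by 1; draft disagrees every 4th step.
  target = lambda seq: seq[-1] + 1
  draft = lambda seq: seq[-1] + (2 if len(seq) % 4 == 0 else 1)
  print(speculative_decode(target, draft, [0], num_tokens=8))  # [0, 1, ..., 8]
  ```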
- ☆11 · Aug 4, 2022 · Updated 3 years ago
- Curated collection of papers in MoE model inference☆371 · Mar 12, 2026 · Updated last month
- A reference implementation of the Mind Mappings Framework.☆30 · Dec 2, 2021 · Updated 4 years ago
- ☆14 · Dec 5, 2024 · Updated last year
- Papers and accompanying code for AI systems☆357 · Feb 10, 2026 · Updated 2 months ago
- Dynamic Memory Management for Serving LLMs without PagedAttention☆478 · May 30, 2025 · Updated 10 months ago
- ☆34 · Dec 19, 2025 · Updated 3 months ago
- Welder (OSDI 2023), a deep learning compiler☆33 · Nov 24, 2023 · Updated 2 years ago
- ☆119 · May 16, 2025 · Updated 11 months ago
- Large Language Model (LLM) Systems Paper List☆1,918 · Mar 24, 2026 · Updated 3 weeks ago
- PyTorch library for cost-effective, fast and easy serving of MoE models.☆295 · Updated this week
- SGLang is a fast serving framework for large language models and vision language models.☆30 · Updated this week
- ☆16 · Nov 22, 2022 · Updated 3 years ago
- ☆635 · Jan 14, 2026 · Updated 3 months ago
- Paper reading and discussion notes, covering AI frameworks, distributed systems, cluster management, etc.☆59 · Mar 4, 2026 · Updated last month
- A general-purpose CNN accelerator for Xilinx FPGAs; designed for the KV260 board and portable to other MPSoC platforms☆20 · Dec 13, 2024 · Updated last year
- A low-latency & high-throughput serving engine for LLMs☆491 · Jan 8, 2026 · Updated 3 months ago
- ☆89 · Apr 2, 2022 · Updated 4 years ago
- Slides and attachments for the tech share on "Containers" at the 2023/12/22 weekly meeting (Room 电三 420)☆10 · Dec 22, 2023 · Updated 2 years ago
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI.☆5,122 · Updated this week
- 📚A curated list of Awesome Diffusion Inference Papers with Codes: Sampling, Cache, Quantization, Parallelism, etc.🎉☆538 · Mar 19, 2026 · Updated last month
- NEO is an LLM inference engine built to alleviate the GPU memory crisis via CPU offloading☆94 · Jun 16, 2025 · Updated 10 months ago
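  The premise behind CPU offloading is that scarce GPU memory can be supplemented with host RAM: cold KV-cache blocks live in pinned CPU memory and are prefetched to the GPU only when the current batch needs them. A minimal PyTorch sketch of that pattern, assuming nothing about NEO's real interface:

  ```python
  import torch

  # Sketch of KV-cache CPU offloading: keep cold KV blocks in pinned host
  # memory and copy them to the GPU on demand. Names are illustrative.
  class OffloadedKVBlock:
      def __init__(self, shape, device="cuda"):
          self.device = device
          # pin_memory() enables asynchronous host-to-device copies
          self.cpu = torch.empty(shape, dtype=torch.float16).pin_memory()
          self.gpu = None

      def to_gpu(self, stream=None):
          """Prefetch this block; can overlap with compute on another stream."""
          with torch.cuda.stream(stream or torch.cuda.current_stream()):
              self.gpu = self.cpu.to(self.device, non_blocking=True)
          return self.gpu

      def evict(self):
          """Write back and drop the GPU copy to free VRAM."""
          self.cpu.copy_(self.gpu)
          self.gpu = None

  if torch.cuda.is_available():
      block = OffloadedKVBlock((16, 128, 64))   # [tokens, heads, head_dim]
      copy_stream = torch.cuda.Stream()
      block.to_gpu(stream=copy_stream)          # prefetch on a side stream
      torch.cuda.current_stream().wait_stream(copy_stream)
      block.evict()
  ```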
- How to optimize common algorithms in CUDA☆2,925 · Apr 9, 2026 · Updated last week
- ☆242 · Oct 24, 2025 · Updated 5 months ago
- FlashInfer: Kernel Library for LLM Serving☆5,372 · Apr 11, 2026 · Updated last week
- QAQ: Quality Adaptive Quantization for LLM KV Cache☆53 · Mar 27, 2024 · Updated 2 years ago
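  The building block behind KV-cache quantization schemes like QAQ is uniform integer quantization of the cached keys/values, e.g. per token; QAQ's adaptive bit-width policy itself is not reproduced in this generic sketch.

  ```python
  import torch

  # Generic per-token uniform quantization of a KV-cache tensor. Schemes
  # like QAQ additionally pick the bit-width per entry based on estimated
  # quality impact; that adaptive policy is omitted here.
  def quantize_kv(kv: torch.Tensor, bits: int = 4):
      # kv: [tokens, heads, head_dim]; quantize each token independently
      flat = kv.reshape(kv.shape[0], -1)
      lo = flat.min(dim=1, keepdim=True).values
      hi = flat.max(dim=1, keepdim=True).values
      scale = (hi - lo).clamp(min=1e-8) / (2**bits - 1)
      q = ((flat - lo) / scale).round().to(torch.uint8)  # fits uint8 for bits <= 8
      return q, scale, lo

  def dequantize_kv(q, scale, lo, shape):
      return (q.float() * scale + lo).reshape(shape)

  kv = torch.randn(32, 8, 64)
  q, scale, lo = quantize_kv(kv, bits=4)
  err = (dequantize_kv(q, scale, lo, kv.shape) - kv).abs().max()
  print(f"max abs error at 4 bits: {err:.4f}")
  ```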
- Loads and runs Linux RISC-V .elf files on Linux, macOS, and Windows.☆16 · Updated this week