sihyeong / Awesome-LLM-Inference-Engine
☆111 · Updated last month
Alternatives and similar repositories for Awesome-LLM-Inference-Engine
Users interested in Awesome-LLM-Inference-Engine are comparing it to the libraries listed below.
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving. ☆281 · Updated this week
- A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of … ☆239 · Updated 2 months ago
- Since the emergence of ChatGPT in 2022, the acceleration of Large Language Models has become increasingly important. Here is a list of pap… ☆264 · Updated 5 months ago
- ☆43 · Updated last year
- Analyze the inference of Large Language Models (LLMs), covering aspects like computation, storage, transmission, and hardware roofline mod… ☆525 · Updated 11 months ago
- This repository serves as a comprehensive survey of LLM development, featuring numerous research papers along with their corresponding co… ☆175 · Updated last week
- Awesome list for LLM quantization. ☆263 · Updated last week
- ☆145 · Updated 5 months ago
- Awesome-LLM-KV-Cache: A curated list of 📙Awesome LLM KV Cache Papers with Codes. ☆345 · Updated 5 months ago
- Compare different hardware platforms via the Roofline Model for LLM inference tasks. ☆110 · Updated last year
- ☆116 · Updated 10 months ago
- A summary of awesome work on optimizing LLM inference. ☆95 · Updated 2 months ago
- [DAC'25] Official implementation of "HybriMoE: Hybrid CPU-GPU Scheduling and Cache Management for Efficient MoE Inference". ☆64 · Updated 2 months ago
- An LLM theoretical performance analysis tool supporting parameter, FLOPs, memory, and latency analysis (a minimal worked sketch follows this list). ☆101 · Updated last month
- Materials for learning SGLang. ☆522 · Updated 3 weeks ago
- CPM.cu is a lightweight, high-performance CUDA implementation for LLMs, optimized for end-device inference and featuring cutting-edge tec… ☆171 · Updated last week
- Dynamic Memory Management for Serving LLMs without PagedAttention. ☆405 · Updated 2 months ago
- Curated collection of papers on MoE model inference. ☆225 · Updated last week
- A low-latency & high-throughput serving engine for LLMs. ☆402 · Updated 2 months ago
- 📰 Must-read papers on KV Cache Compression (constantly updating 🤗). ☆506 · Updated last week
- PyTorch library for cost-effective, fast, and easy serving of MoE models. ☆220 · Updated last month
- ☆78 · Updated 3 months ago
- [ICLR 2025] PEARL: Parallel Speculative Decoding with Adaptive Draft Length. ☆102 · Updated 3 months ago
- ☆127 · Updated 3 weeks ago
- Efficient and easy multi-instance LLM serving. ☆458 · Updated this week
- ☆68 · Updated last year
- Efficient LLM Inference over Long Sequences. ☆387 · Updated last month
- LLM Serving Performance Evaluation Harness. ☆79 · Updated 5 months ago
- A large-scale simulation framework for LLM inference. ☆418 · Updated 2 weeks ago
- [OSDI'24] Serving LLM-based Applications Efficiently with Semantic Variable. ☆172 · Updated 10 months ago
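
Several entries above (the Roofline Model comparison and the theoretical performance tool) rest on the same back-of-the-envelope arithmetic: a dense decoder spends roughly 2 FLOPs per weight per generated token, reads every weight once per token, and the slower of the compute roof and the memory-bandwidth roof sets the latency floor. Below is a minimal sketch of that estimate; the function name and all hardware numbers are illustrative assumptions, not taken from any repository listed here.

```python
# Roofline-style estimate for single-batch LLM decode.
# Hypothetical helper; every constant below is an illustrative assumption.

def decode_step_estimate(n_params: float, peak_flops: float, mem_bw: float):
    """Estimate per-token decode latency for a dense fp16 LLM.

    n_params   -- model parameter count (e.g. 7e9 for a 7B model)
    peak_flops -- peak accelerator throughput in FLOP/s
    mem_bw     -- memory bandwidth in bytes/s
    """
    flops_per_token = 2 * n_params   # ~2 FLOPs per weight per token
    bytes_per_token = 2 * n_params   # fp16: each weight (2 bytes) read once
    intensity = flops_per_token / bytes_per_token  # arithmetic intensity, FLOP/byte
    ridge = peak_flops / mem_bw                    # hardware ridge point, FLOP/byte
    compute_bound = intensity > ridge
    # Latency floor: the slower of the compute and memory roofs.
    latency = max(flops_per_token / peak_flops, bytes_per_token / mem_bw)
    return intensity, ridge, compute_bound, latency

# Example: a 7B fp16 model on a GPU with ~300 TFLOP/s and ~1 TB/s HBM.
ai, ridge, cb, t = decode_step_estimate(7e9, 3e14, 1e12)
print(f"arithmetic intensity {ai:.1f} FLOP/B vs. ridge {ridge:.0f} FLOP/B")
print(f"{'compute' if cb else 'memory'}-bound, ~{t * 1e3:.1f} ms/token")
```

With these example numbers the arithmetic intensity (about 1 FLOP/byte) sits far below the ridge point, so single-batch decode is memory-bandwidth-bound at roughly 14 ms/token — the kind of conclusion the roofline-analysis repositories above automate across hardware platforms.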