AmadeusChan / Awesome-LLM-System-Papers
☆562 · Updated 3 weeks ago
Alternatives and similar repositories for Awesome-LLM-System-Papers:
Users interested in Awesome-LLM-System-Papers are comparing it to the libraries listed below.
- Papers and their accompanying code for AI systems ☆286 · Updated 2 months ago
- Large Language Model (LLM) Systems Paper List ☆897 · Updated this week
- Since the emergence of ChatGPT in 2022, accelerating Large Language Models has become increasingly important. Here is a list of pap… ☆240 · Updated 3 weeks ago
- Disaggregated serving system for Large Language Models (LLMs). ☆529 · Updated 7 months ago
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆333 · Updated last week
- 📰 Must-read papers and blogs on Speculative Decoding ⚡️ ☆667 · Updated last week
- Analyze the inference of Large Language Models (LLMs), covering computation, storage, transmission, and hardware roofline mod… ☆425 · Updated 6 months ago
- A low-latency & high-throughput serving engine for LLMs ☆334 · Updated 2 months ago
- 📰 Must-read papers on KV Cache Compression (constantly updating 🤗). ☆359 · Updated last week
- A large-scale simulation framework for LLM inference ☆356 · Updated 4 months ago
- A curated list of awesome projects and papers for distributed training or inference ☆226 · Updated 5 months ago
- Awesome-LLM-KV-Cache: A curated list of 📙 Awesome LLM KV Cache Papers with Codes. ☆257 · Updated last month
- ☆311 · Updated last year
- A baseline repository of Auto-Parallelism in Training Neural Networks ☆143 · Updated 2 years ago
- Curated collection of papers in machine learning systems ☆271 · Updated last month
- Latency and Memory Analysis of Transformer Models for Training and Inference ☆402 · Updated 3 weeks ago
- Spec-Bench: A Comprehensive Benchmark and Unified Evaluation Platform for Speculative Decoding (ACL 2024 Findings) ☆245 · Updated last week
- Materials for learning SGLang ☆360 · Updated last week
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving ☆301 · Updated 9 months ago
- Curated collection of papers in MoE model inference ☆123 · Updated last month
- Efficient and easy multi-instance LLM serving ☆352 · Updated this week
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference ☆265 · Updated 4 months ago
- [NeurIPS'23] H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models. ☆433 · Updated 8 months ago
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se…