xlite-dev / Awesome-LLM-Inference
A curated list of Awesome LLM/VLM Inference Papers with Codes: Flash-Attention, Paged-Attention, WINT8/4, Parallelism, etc.
⭐ 4,909 · Updated last month
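Among the headline topics above is WINT8/4, i.e. weight-only INT8/INT4 quantization. As a quick orientation, here is a minimal sketch of per-channel symmetric weight-only INT8 quantization in PyTorch; `quantize_wint8` and `wint8_linear` are illustrative names, not APIs from any repository listed on this page.

```python
# Minimal sketch of weight-only INT8 (WINT8) quantization, assuming
# per-output-channel symmetric scales. Illustrative only; not the API of
# any repository in this list.
import torch

def quantize_wint8(w: torch.Tensor):
    """Quantize a [out, in] weight matrix to int8 with one scale per row."""
    scale = w.abs().amax(dim=1, keepdim=True).clamp_min(1e-8) / 127.0
    q = torch.clamp(torch.round(w / scale), -127, 127).to(torch.int8)
    return q, scale

def wint8_linear(x: torch.Tensor, q: torch.Tensor, scale: torch.Tensor):
    """Dequantize on the fly and apply the layer: y = x @ (q * scale).T."""
    return x @ (q.to(x.dtype) * scale).T

w = torch.randn(256, 512)
q, s = quantize_wint8(w)
x = torch.randn(4, 512)
print((wint8_linear(x, q, s) - x @ w.T).abs().max())  # small quantization error
```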
Alternatives and similar repositories for Awesome-LLM-Inference
Users interested in Awesome-LLM-Inference are comparing it to the libraries listed below.
- My learning notes for ML SYS. (⭐ 5,077 · Updated last week)
- FlashInfer: Kernel Library for LLM Serving; a paged KV-cache bookkeeping sketch follows this list. (⭐ 4,707 · Updated this week)
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalability, and high-speed performance. (⭐ 3,845 · Updated this week)
- A curated list for Efficient Large Language Models (⭐ 1,935 · Updated 7 months ago)
- Large Language Model (LLM) Systems Paper List (⭐ 1,765 · Updated last week)
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration (⭐ 3,420 · Updated 6 months ago)
- Awesome LLM compression research papers and tools. (⭐ 1,759 · Updated 2 months ago)
- [TMLR 2024] Efficient Large Language Models: A Survey (⭐ 1,249 · Updated 7 months ago)
- How to optimize some algorithms in CUDA. (⭐ 2,769 · Updated last week)
- An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO & REINFORCE++ & TIS & vLLM & Ray & Async RL) (⭐ 8,851 · Updated this week)
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI. (⭐ 4,600 · Updated this week)
- LeetCUDA: Modern CUDA Learn Notes with PyTorch for Beginners, 200+ CUDA Kernels, Tensor Cores, HGEMM, FA-2 MMA. (⭐ 9,382 · Updated 2 weeks ago)
- Material for gpu-mode lectures (⭐ 5,588 · Updated last month)
- Must-read papers and blogs on Speculative Decoding; a minimal draft-and-verify sketch follows this list. (⭐ 1,082 · Updated last month)
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM (⭐ 2,580 · Updated last week)
- Must-read papers and blogs on LLM-based Long Context Modeling (⭐ 1,878 · Updated last week)
- slime is an LLM post-training framework for RL scaling. (⭐ 3,466 · Updated this week)
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs. (⭐ 7,544 · Updated this week)
- Efficient implementations of state-of-the-art linear attention models (⭐ 4,243 · Updated last week)
- LLM notes covering model inference, Transformer model structure, and LLM framework code analysis. (⭐ 859 · Updated last month)
- A self-learning tutorial for CUDA high-performance programming. (⭐ 838 · Updated last week)
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. (⭐ 2,305 · Updated 8 months ago)
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models; a smoothing sketch follows this list. (⭐ 1,590 · Updated last year)
- The official repo of Pai-Megatron-Patch for LLM & VLM large-scale training, developed by Alibaba Cloud. (⭐ 1,513 · Updated last month)
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on Hopper, Ada, and Blackwell GPUs. (⭐ 3,092 · Updated this week)
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 (⭐ 2,220 · Updated 5 months ago)
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads (⭐ 2,694 · Updated last year)
- Official Implementation of EAGLE-1 (ICML'24), EAGLE-2 (EMNLP'24), and EAGLE-3 (NeurIPS'25). (⭐ 2,133 · Updated last week)
- Domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels (⭐ 4,739 · Updated this week)
- An Efficient and User-Friendly Scaling Library for Reinforcement Learning with Large Language Models (⭐ 2,663 · Updated this week)
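For readers comparing the serving engines above (FlashInfer, LightLLM, LMDeploy, vLLM-based stacks), the bookkeeping behind paged KV caches is simple to sketch: the cache is carved into fixed-size blocks, and each sequence keeps a block table mapping logical token positions to physical blocks. The class below is a hypothetical illustration, not the actual data structure of any listed project.

```python
# Hedged sketch of PagedAttention-style KV-cache bookkeeping. Names are
# illustrative, not the API of FlashInfer, vLLM, or LightLLM.
class PagedKVCache:
    def __init__(self, num_blocks: int, block_size: int):
        self.block_size = block_size
        self.free = list(range(num_blocks))  # pool of physical block ids
        self.tables = {}                     # seq_id -> list of physical blocks
        self.lengths = {}                    # seq_id -> tokens written so far

    def append_token(self, seq_id: int) -> tuple[int, int]:
        """Reserve a slot for one new token; returns (physical_block, offset)."""
        table = self.tables.setdefault(seq_id, [])
        n = self.lengths.get(seq_id, 0)
        if n % self.block_size == 0:         # current block is full: grab one
            table.append(self.free.pop())
        self.lengths[seq_id] = n + 1
        return table[n // self.block_size], n % self.block_size

    def release(self, seq_id: int):
        """Return a finished sequence's blocks to the pool (no copying)."""
        self.free.extend(self.tables.pop(seq_id, []))
        self.lengths.pop(seq_id, None)

cache = PagedKVCache(num_blocks=8, block_size=4)
slots = [cache.append_token(seq_id=0) for _ in range(6)]  # spans two blocks
print(slots)  # e.g. [(7, 0), (7, 1), (7, 2), (7, 3), (6, 0), (6, 1)]
cache.release(0)
```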
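Several entries above (the Speculative Decoding paper list, Medusa, EAGLE) revolve around draft-and-verify decoding. A minimal greedy sketch, assuming hypothetical `draft_next` / `target_next` single-token callables rather than any repository's real API:

```python
# Greedy speculative decoding sketch: a cheap draft model proposes k tokens;
# the target model verifies them and keeps the longest agreeing prefix.
# `draft_next` and `target_next` are hypothetical stand-ins for model calls.
def speculative_step(prefix, draft_next, target_next, k=4):
    # 1) Draft k tokens autoregressively with the cheap model.
    ctx, draft = list(prefix), []
    for _ in range(k):
        t = draft_next(ctx)
        draft.append(t)
        ctx.append(t)
    # 2) Verify with the target model (one batched forward pass in practice;
    #    emulated token by token here); accept until the first disagreement.
    ctx, accepted = list(prefix), []
    for t in draft:
        expected = target_next(ctx)
        if expected != t:
            accepted.append(expected)  # target's token replaces the first miss
            break
        accepted.append(t)
        ctx.append(t)
    else:
        accepted.append(target_next(ctx))  # all k accepted: free bonus token
    return list(prefix) + accepted  # 1 to k+1 new tokens per target pass

# Toy check: the draft always proposes "a"; the target disagrees on even
# context lengths, so only the matching prefix plus one correction survives.
out = speculative_step(
    ["<s>"],
    draft_next=lambda ctx: "a",
    target_next=lambda ctx: "a" if len(ctx) % 2 else "b",
)
print(out)  # ['<s>', 'a', 'b']
```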
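The SmoothQuant entry is also easy to illustrate: the method migrates quantization difficulty from activations to weights with a per-input-channel scale, so the matrix product is unchanged in full precision while the scaled activations have a flatter range. A minimal sketch, assuming the paper's default migration strength alpha = 0.5; this is not code from the official repository.

```python
# SmoothQuant-style activation smoothing: (x / s) @ (w * s).T == x @ w.T
# exactly in full precision, but (x / s) quantizes much better.
import torch

def smooth(x: torch.Tensor, w: torch.Tensor, alpha: float = 0.5):
    """x: [tokens, in], w: [out, in]. Returns smoothed (x_hat, w_hat)."""
    act_max = x.abs().amax(dim=0).clamp_min(1e-5)  # per input channel
    wgt_max = w.abs().amax(dim=0).clamp_min(1e-5)  # per input channel
    s = act_max.pow(alpha) / wgt_max.pow(1 - alpha)
    return x / s, w * s

x, w = torch.randn(32, 512) * 5, torch.randn(256, 512)  # outlier-heavy acts
x_hat, w_hat = smooth(x, w)
print(torch.allclose(x @ w.T, x_hat @ w_hat.T, atol=1e-3))  # True
```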