shishishu / LLM-Inference-Acceleration
LLM Inference with Deep Learning Accelerator.
☆39 · Updated 4 months ago
Alternatives and similar repositories for LLM-Inference-Acceleration
Users interested in LLM-Inference-Acceleration are comparing it to the libraries listed below.
- Summary of some awesome work for optimizing LLM inference ☆73 · Updated this week
- Curated collection of papers on MoE model inference ☆187 · Updated 3 months ago
- Implements some methods of LLM KV cache sparsity (see the KV-cache sizing sketch after this list) ☆32 · Updated 11 months ago
- Since the emergence of ChatGPT in 2022, the acceleration of Large Language Models has become increasingly important. Here is a list of pap… ☆253 · Updated 2 months ago
- Compare different hardware platforms via the Roofline Model for LLM inference tasks (see the roofline sketch after this list) ☆100 · Updated last year
- This repository serves as a comprehensive survey of LLM development, featuring numerous research papers along with their corresponding co… ☆144 · Updated 3 months ago
- [ACL 2024] A novel QAT with Self-Distillation framework to enhance ultra low-bit LLMs. ☆114 · Updated last year
- ☆76 · Updated last month
- LLM theoretical performance analysis tools supporting parameter, FLOPs, memory, and latency analysis. ☆92 · Updated this week
- ☆66 · Updated 7 months ago
- ☆36 · Updated 9 months ago
- ☆138 · Updated 3 months ago
- nnScaler: Compiling DNN models for Parallel Training ☆113 · Updated last month
- ATC23 AE ☆45 · Updated 2 years ago
- [USENIX ATC '24] Accelerating the Training of Large Language Models using Efficient Activation Rematerialization and Optimal Hybrid Paral… ☆54 · Updated 10 months ago
- ☆54 · Updated last year
- Awesome-LLM-KV-Cache: A curated list of 📙 Awesome LLM KV Cache Papers with Codes. ☆304 · Updated 3 months ago
- a curated list of high-quality papers on resource-efficient LLMs 🌱 ☆122 · Updated 2 months ago
- Examples of CUDA implementations by Cutlass CuTe ☆188 · Updated 4 months ago
- A lightweight design for computation-communication overlap. ☆132 · Updated 3 weeks ago
- Penn CIS 5650 (GPU Programming and Architecture) Final Project ☆31 · Updated last year
- High performance Transformer implementation in C++. ☆124 · Updated 4 months ago
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity ☆211 · Updated last year
- Artifact of OSDI '24 paper, "Llumnix: Dynamic Scheduling for Large Language Model Serving" ☆61 · Updated 11 months ago
- ☆73 · Updated 2 weeks ago
- [ICLR 2025] PEARL: Parallel Speculative Decoding with Adaptive Draft Length ☆83 · Updated last month
- Here are my personal paper reading notes (including cloud computing, resource management, systems, machine learning, deep learning, and o… ☆105 · Updated this week
- Implement Flash Attention using Cute. ☆85 · Updated 5 months ago
- A GPU-optimized system for efficient long-context LLM decoding with low-bit KV cache. ☆36 · Updated last month
- ☆96 · Updated 8 months ago
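
As referenced in the Roofline Model entry above, here is a minimal sketch of how roofline analysis bounds LLM inference throughput. It is an illustration only, not code from any listed repository; the hardware figures are assumptions, roughly A100-class values.

```python
# Minimal roofline sketch: attainable FLOP/s is bounded by
# min(peak compute, memory bandwidth * arithmetic intensity).

def roofline(peak_flops: float, peak_bw: float, intensity: float) -> float:
    """Attainable FLOP/s for a kernel with the given arithmetic
    intensity (FLOPs per byte moved to/from memory)."""
    return min(peak_flops, peak_bw * intensity)

# Assumed accelerator: 312 TFLOP/s peak FP16, 1.55 TB/s HBM bandwidth
# (roughly A100-class numbers, used here only for illustration).
PEAK_FLOPS = 312e12
PEAK_BW = 1.55e12

# LLM decode is dominated by GEMV-like operations with low arithmetic
# intensity, so it usually lands on the memory-bound side of the roof.
for ai in [1, 10, 100, 1000]:
    attainable = roofline(PEAK_FLOPS, PEAK_BW, ai)
    bound = "memory-bound" if attainable < PEAK_FLOPS else "compute-bound"
    print(f"AI={ai:>4} FLOP/byte -> {attainable / 1e12:6.1f} TFLOP/s ({bound})")
```

With these assumed numbers the ridge point sits near 200 FLOP/byte, which is why raising arithmetic intensity (e.g. via batching) is the main lever for pushing decode from the memory-bound toward the compute-bound regime.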
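
Several entries above (KV cache sparsity, low-bit KV caches, Awesome-LLM-KV-Cache) target the KV cache because it dominates decode memory at long context. A minimal sizing sketch shows why, under an assumed Llama-2-7B-like shape (32 layers, 32 KV heads, head dimension 128, FP16); the shape is an assumption for illustration, not taken from any listed repository.

```python
# Minimal KV-cache sizing sketch: per-token cost is
# 2 (K and V) * layers * kv_heads * head_dim * bytes_per_element.

def kv_cache_bytes(layers: int, kv_heads: int, head_dim: int,
                   seq_len: int, batch: int, dtype_bytes: int = 2) -> int:
    """Total KV-cache size in bytes for a decoder-only transformer."""
    return 2 * layers * kv_heads * head_dim * seq_len * batch * dtype_bytes

# Assumed Llama-2-7B-like shape; illustrative only.
size = kv_cache_bytes(layers=32, kv_heads=32, head_dim=128,
                      seq_len=4096, batch=8, dtype_bytes=2)
print(f"KV cache: {size / 2**30:.1f} GiB")  # 16.0 GiB at this shape
```

At 4k context and batch 8 the cache alone costs 16 GiB in FP16, which is why sparsifying it or quantizing it to low bit-widths pays off quickly.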