shishishu / LLM-Inference-Acceleration
LLM Inference with Deep Learning Accelerator.
☆56 · Updated 11 months ago
Alternatives and similar repositories for LLM-Inference-Acceleration
Users interested in LLM-Inference-Acceleration are comparing it to the libraries listed below.
- Compare different hardware platforms via the Roofline Model for LLM inference tasks (a minimal roofline sketch follows this list). ☆119 · Updated last year
- Summary of awesome work on optimizing LLM inference. ☆151 · Updated 3 weeks ago
- Since the emergence of ChatGPT in 2022, accelerating Large Language Models has become increasingly important. Here is a list of pap… ☆282 · Updated 9 months ago
- Implements some methods of LLM KV cache sparsity (see the toy eviction sketch after this list). ☆41 · Updated last year
- ☆154 · Updated 9 months ago
- This repository serves as a comprehensive survey of LLM development, featuring numerous research papers along with their corresponding co… ☆259 · Updated 2 weeks ago
- DLSlime: Flexible & Efficient Heterogeneous Transfer Toolkit ☆87 · Updated this week
- [ICLR 2025] PEARL: Parallel Speculative Decoding with Adaptive Draft Length ☆139 · Updated this week
- Curated collection of papers on MoE model inference ☆320 · Updated 2 months ago
- LLM theoretical performance analysis tools supporting parameter, FLOPs, memory, and latency analysis (see the back-of-envelope sketch after this list). ☆113 · Updated 5 months ago
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆119 · Updated 7 months ago
- ☆45 · Updated last year
- High-performance Transformer implementation in C++. ☆146 · Updated 11 months ago
- ☆83 · Updated 8 months ago
- ATC23 AE ☆47 · Updated 2 years ago
- LLM training technologies developed by Kwai ☆67 · Updated last month
- Implements Flash Attention using CuTe. ☆98 · Updated last year
- [DAC'25] Official implementation of "HybriMoE: Hybrid CPU-GPU Scheduling and Cache Management for Efficient MoE Inference" ☆95 · Updated last week
- An annotated nano_vllm repository, with a completed MiniCPM4 adaptation and support for registering new models. ☆123 · Updated 4 months ago
- [NeurIPS 2024] Efficient LLM Scheduling by Learning to Rank ☆66 · Updated last year
- A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of … ☆300 · Updated 6 months ago
- Official implementation of the ICML 2024 paper "ExCP: Extreme LLM Checkpoint Compression via Weight-Momentum Joint Shrinking". ☆47 · Updated last year
- A lightweight design for computation-communication overlap. ☆200 · Updated 2 months ago
- Triton adapter for Ascend. Mirror of https://gitee.com/ascend/triton-ascend ☆93 · Updated this week
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs ☆59 · Updated 9 months ago
- Awesome-LLM-KV-Cache: A curated list of 📙Awesome LLM KV Cache Papers with Codes. ☆404 · Updated 9 months ago
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity ☆230 · Updated 2 years ago
- gLLM: Global Balanced Pipeline Parallelism System for Distributed LLM Serving with Token Throttling ☆51 · Updated last week
- A high-performance distributed deep learning system targeting large-scale and automated distributed training. If you have any interests, … ☆122 · Updated 2 years ago
- ☆103 · Updated last year
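
For the Roofline Model entry above: a minimal sketch of how a roofline estimate works for one LLM inference kernel. The hardware numbers (312 TFLOP/s FP16 peak, 2 TB/s HBM bandwidth, roughly A100-class) and the GEMM example are illustrative assumptions, not figures taken from the listed repository.

```python
# Hedged sketch: roofline estimate for one matmul during batch-1 decode.
# Hardware numbers below are assumptions for illustration only.
PEAK_FLOPS = 312e12   # assumed FP16 tensor-core peak, FLOP/s
PEAK_BW = 2.0e12      # assumed HBM bandwidth, byte/s

def roofline_time(flops: float, bytes_moved: float) -> float:
    """Attainable time is bounded by both roofs: compute and memory."""
    return max(flops / PEAK_FLOPS, bytes_moved / PEAK_BW)

# Example: one GEMM y = W @ x with W of shape d x d in FP16; at batch 1,
# weight traffic dominates, so the op sits far left on the roofline.
d = 4096
flops = 2 * d * d            # one multiply-add per weight
bytes_moved = 2 * d * d      # each FP16 weight read once (2 bytes)
intensity = flops / bytes_moved
t = roofline_time(flops, bytes_moved)
print(f"intensity: {intensity:.1f} FLOP/B, est. time: {t * 1e6:.1f} us")
# -> intensity: 1.0 FLOP/B, est. time: 16.8 us (memory bound)
```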
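For the KV cache sparsity entry above: a toy sketch of one common approach, heavy-hitter eviction in the style of H2O, which keeps the positions with the largest accumulated attention mass plus a recent window. The function `evict_kv` and its shapes are hypothetical illustrations of the general technique; the listed repository may implement different methods.

```python
# Hedged sketch: heavy-hitter KV-cache eviction (H2O-style), for illustration.
import numpy as np

def evict_kv(keys, values, attn_scores, budget: int, recent: int = 8):
    """keys/values: [seq, dim]; attn_scores: accumulated attention mass per
    position, shape [seq]. Returns kept indices and the pruned cache."""
    seq = keys.shape[0]
    if seq <= budget:
        return np.arange(seq), keys, values
    recent_idx = np.arange(seq - recent, seq)        # always keep the tail
    older = np.arange(seq - recent)
    # Among older positions, keep the (budget - recent) heaviest hitters.
    heavy = older[np.argsort(attn_scores[older])[::-1][: budget - recent]]
    keep = np.sort(np.concatenate([heavy, recent_idx]))
    return keep, keys[keep], values[keep]

# Toy usage: prune 128 cached tokens down to a budget of 32.
rng = np.random.default_rng(0)
k = rng.normal(size=(128, 64)); v = rng.normal(size=(128, 64))
scores = rng.random(128)
keep, k_s, v_s = evict_kv(k, v, scores, budget=32)
print(len(keep), k_s.shape)   # 32 (32, 64)
```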
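For the theoretical performance analysis entry above: a back-of-envelope sketch using standard approximations (roughly 2 FLOPs per parameter per decoded token, and a KV cache holding K and V per layer per head). The 7B-ish configuration is an assumption, and this is not necessarily the listed tool's exact methodology.

```python
# Hedged sketch: params / decode FLOPs / KV-cache size for a decoder stack.
def layer_params(d_model: int, d_ff: int) -> int:
    attn = 4 * d_model * d_model      # Wq, Wk, Wv, Wo projections
    mlp = 2 * d_model * d_ff          # up- and down-projection
    return attn + mlp

def decode_flops_per_token(n_layers: int, d_model: int, d_ff: int) -> float:
    # ~2 FLOPs per parameter per token (one multiply + one add)
    return 2 * n_layers * layer_params(d_model, d_ff)

def kv_cache_bytes(n_layers, n_heads, head_dim, seq_len, batch, dtype_bytes=2):
    # K and V per layer: batch x heads x seq_len x head_dim, FP16 by default
    return 2 * n_layers * batch * n_heads * seq_len * head_dim * dtype_bytes

# Assumed 7B-ish configuration, 4k context, batch 1:
print(f"{decode_flops_per_token(32, 4096, 11008) / 1e9:.1f} GFLOPs/token")
print(f"{kv_cache_bytes(32, 32, 128, 4096, 1) / 2**30:.2f} GiB KV cache")
# -> ~10.1 GFLOPs/token, 2.00 GiB KV cache
```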