Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity
☆239 · Updated Sep 24, 2023
Alternatives and similar repositories for flash-llm
Users interested in flash-llm are comparing it to the libraries listed below.
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs ☆63 · Updated Mar 25, 2025
- ☆166 · Updated Jul 22, 2024
- GPTQ inference TVM kernel ☆40 · Updated Apr 25, 2024
- An Easy-to-understand TensorOp Matmul Tutorial ☆422 · Updated Mar 5, 2026
- MAGIS: Memory Optimization via Coordinated Graph Transformation and Scheduling for DNN (ASPLOS'24) ☆57 · Updated May 29, 2024
- A kernel-optimizing system for recommendation models ☆12 · Updated Jun 5, 2025
- A library of GPU kernels for sparse matrix operations ☆285 · Updated Nov 24, 2020
- ☆20 · Updated Sep 28, 2024
- An Optimizing Compiler for Recommendation Model Inference ☆26 · Updated Jun 5, 2025
- Horizontal Fusion ☆24 · Updated Jan 7, 2022
- ☆354 · Updated Apr 2, 2024
- ☆261 · Updated Jul 11, 2024
- [NeurIPS'23] H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models ☆510 · Updated Aug 1, 2024
- A throughput-oriented high-performance serving framework for LLMs ☆953 · Updated Mar 29, 2026
- FP16xINT4 LLM inference kernel that achieves near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens ☆1,051 · Updated Sep 4, 2024
- SparseTIR: Sparse Tensor Compiler for Deep Learning ☆144 · Updated Mar 31, 2023
- An Attention Superoptimizer ☆22 · Updated Jan 20, 2025
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Serving… ☆822 · Updated Mar 6, 2025
- A low-latency & high-throughput serving engine for LLMs ☆490 · Updated Jan 8, 2026
- play gemm with tvm ☆91 · Updated Jul 22, 2023
- Artifact for USENIX ATC'23: TC-GNN: Bridging Sparse GNN Computation and Dense Tensor Cores on GPUs ☆56 · Updated Oct 16, 2023
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving ☆336 · Updated Jul 2, 2024
- MSCCL++: A GPU-driven communication stack for scalable AI applications ☆498 · Updated this week
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆474 · Updated May 30, 2025
- Performance of the C++ interface of FlashAttention and FlashAttention-2 in large language model (LLM) inference scenarios ☆44 · Updated Feb 27, 2025
- A fast communication-overlapping library for tensor/expert parallelism on GPUs ☆1,284 · Updated Aug 28, 2025
- ☆114 · Updated Aug 26, 2024
- FlashInfer: Kernel Library for LLM Serving ☆5,273 · Updated Apr 4, 2026
- Efficient and easy multi-instance LLM serving ☆541 · Updated Mar 12, 2026
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment ☆758 · Updated Aug 6, 2025
- ☆33 · Updated Jul 17, 2024
- A Vectorized N:M Format for Unleashing the Power of Sparse Tensor Cores ☆59 · Updated Nov 24, 2023
- Disaggregated serving system for Large Language Models (LLMs) ☆798 · Updated Apr 6, 2025
- ☆32 · Updated Aug 24, 2022
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing ☆106 · Updated Jun 28, 2025
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference ☆46 · Updated Jun 11, 2025
- A flexible and efficient deep neural network (DNN) compiler that generates high-performance executables from a DNN model description ☆999 · Updated Sep 19, 2024
- Yinghan's Code Sample ☆366 · Updated Jul 25, 2022
- [ICLR'25] Fast Inference of MoE Models with CPU-GPU Orchestration ☆262 · Updated Nov 18, 2024