microsoft / SparTA
☆159 · Updated last year
Alternatives and similar repositories for SparTA
Users interested in SparTA are comparing it to the libraries listed below.
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity ☆224 · Updated 2 years ago
- A Vectorized N:M Format for Unleashing the Power of Sparse Tensor Cores ☆55 · Updated 2 years ago
- SparseTIR: Sparse Tensor Compiler for Deep Learning ☆141 · Updated 2 years ago
- A lightweight design for computation-communication overlap. ☆190 · Updated last month
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs ☆59 · Updated 8 months ago
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models. ☆68 · Updated 8 months ago
- ☆80 · Updated last year
- Magicube is a high-performance library for quantized sparse matrix operations (SpMM and SDDMM) of deep learning on Tensor Cores. ☆90 · Updated 3 years ago
- play gemm with tvm ☆92 · Updated 2 years ago
- This repository contains integer operators on GPUs for PyTorch. ☆223 · Updated 2 years ago
- [HPCA 2026] A GPU-optimized system for efficient long-context LLMs decoding with low-bit KV cache. ☆63 · Updated 2 weeks ago
- nnScaler: Compiling DNN models for Parallel Training ☆120 · Updated 2 months ago
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5). ☆272 · Updated 4 months ago
- ☆102 · Updated last year
- ☆83 · Updated 3 years ago
- GitHub mirror of the triton-lang/triton repo. ☆100 · Updated this week
- ☆57 · Updated last year
- High-speed GEMV kernels, up to 2.7x speedup over the PyTorch baseline. ☆123 · Updated last year
- DietCode Code Release ☆64 · Updated 3 years ago
- An extension of TVMScript for writing simple, high-performance GPU kernels with Tensor Cores. ☆51 · Updated last year
- Tile-based language built for AI computation across all scales ☆82 · Updated this week
- Artifacts of EVT ASPLOS'24 ☆28 · Updated last year
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer ☆144 · Updated 2 months ago
- PET: Optimizing Tensor Programs with Partially Equivalent Transformations and Automated Corrections ☆121 · Updated 3 years ago
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving ☆331 · Updated last year
- Implement Flash Attention using CuTe. ☆97 · Updated 11 months ago
- ☆246 · Updated last year
- FlashSparse significantly reduces the computation redundancy for unstructured sparsity (for SpMM and SDDMM) on Tensor Cores through a Swa… ☆35 · Updated 2 months ago
- ☆83 · Updated 10 months ago
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel … ☆190 · Updated 10 months ago