google-research / sputnik
A library of GPU kernels for sparse matrix operations.
☆271 · Updated 4 years ago
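As an illustration of the operation class sputnik accelerates (this is not sputnik's actual API; the function and argument names below are illustrative only), a minimal CSR sparse-times-dense multiply (SpMM) looks like:

```python
# Minimal CSR SpMM (sparse A times dense B) in pure Python.
# Illustrative sketch of the operation, NOT sputnik's API.

def spmm(values, col_idx, row_ptr, dense, n_cols_out):
    """Compute y = A @ B where A is CSR (values, col_idx, row_ptr)."""
    n_rows = len(row_ptr) - 1
    out = [[0.0] * n_cols_out for _ in range(n_rows)]
    for i in range(n_rows):
        # Iterate only over the nonzeros stored for row i.
        for k in range(row_ptr[i], row_ptr[i + 1]):
            a = values[k]      # nonzero A[i, col_idx[k]]
            j = col_idx[k]
            for c in range(n_cols_out):
                out[i][c] += a * dense[j][c]
    return out

# A = [[1, 0], [0, 2]] in CSR form, B = all-ones 2x2:
y = spmm([1.0, 2.0], [0, 1], [0, 1, 2], [[1.0, 1.0], [1.0, 1.0]], 2)
# y == [[1.0, 1.0], [2.0, 2.0]]
```

GPU libraries like sputnik get their speedup from tiling and load-balancing this irregular inner loop across threads, which the scalar sketch above deliberately omits.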
Alternatives and similar repositories for sputnik
Users interested in sputnik are comparing it to the libraries listed below.
- ☆144 · Updated 7 months ago
- ☆230 · Updated last year
- Assembler for NVIDIA Volta and Turing GPUs · ☆229 · Updated 3 years ago
- Code for the paper "Design Principles for Sparse Matrix Multiplication on the GPU", accepted to Euro-Par 2018 · ☆73 · Updated 4 years ago
- SparseTIR: Sparse Tensor Compiler for Deep Learning · ☆138 · Updated 2 years ago
- Training neural networks in TensorFlow 2.0 with 5x less memory · ☆134 · Updated 3 years ago
- CUDA Matrix Multiplication Optimization · ☆221 · Updated last year
- Matrix Multiply-Accumulate with CUDA and WMMA (Tensor Cores) · ☆140 · Updated 5 years ago
- CUDA templates for tile-sparse matrix multiplication based on CUTLASS · ☆51 · Updated 7 years ago
- ☆107 · Updated last year
- An extension library of the WMMA API (Tensor Core API) · ☆104 · Updated last year
- ☆176 · Updated last year
- A Python-embedded DSL that makes it easy to write fast, scalable ML kernels with minimal boilerplate · ☆289 · Updated last week
- A Fusion Code Generator for NVIDIA GPUs (commonly known as "nvFuser") · ☆351 · Updated last week
- ☆115 · Updated 8 months ago
- A Vectorized N:M Format for Unleashing the Power of Sparse Tensor Cores · ☆53 · Updated last year
- High-speed GEMV kernels, up to 2.7x faster than the PyTorch baseline · ☆114 · Updated last year
- ☆139 · Updated 4 months ago
- Efficient Top-K implementation on the GPU · ☆185 · Updated 6 years ago
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5) · ☆263 · Updated last month
- Step-by-step optimization of CUDA SGEMM · ☆373 · Updated 3 years ago
- Fastest kernels written from scratch · ☆323 · Updated 5 months ago
- ☆150 · Updated last year
- [MLSys 2021] IOS: Inter-Operator Scheduler for CNN Acceleration · ☆201 · Updated 3 years ago
- ☆88 · Updated 10 months ago
- PyTorch emulation library for Microscaling (MX)-compatible data formats · ☆290 · Updated 2 months ago
- Shared Middle-Layer for Triton Compilation · ☆285 · Updated last week
- GitHub mirror of the triton-lang/triton repo · ☆66 · Updated this week
- Magicube: a high-performance library for quantized sparse matrix operations (SpMM and SDDMM) in deep learning on Tensor Cores · ☆89 · Updated 2 years ago
- Benchmark code for the "Online normalizer calculation for softmax" paper · ☆98 · Updated 7 years ago
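The recurrence from the "Online normalizer calculation for softmax" paper (Milakov and Gimelshein, 2018) can be sketched in a few lines of scalar Python; this is a sketch of the published algorithm, not that repository's CUDA benchmark code:

```python
import math

def online_softmax(xs):
    """One-pass softmax: track the running max m and denominator d
    together, rescaling d whenever a larger maximum is found."""
    m, d = float("-inf"), 0.0
    for x in xs:
        m_new = max(m, x)
        # exp(m - m_new) rescales the old sum to the new maximum;
        # on the first step exp(-inf) == 0.0, so d starts correctly.
        d = d * math.exp(m - m_new) + math.exp(x - m_new)
        m = m_new
    return [math.exp(x - m) / d for x in xs]
```

The point of the single pass is that max and normalizer are computed in one sweep over memory instead of two, which matters for bandwidth-bound GPU softmax kernels.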