google-research / sputnik
A library of GPU kernels for sparse matrix operations.
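The core operation such a library accelerates is SpMM: multiplying a sparse matrix (typically stored in CSR form) by a dense matrix. A minimal NumPy sketch of CSR SpMM, showing the computation the GPU kernels implement — the function name here is illustrative, not sputnik's actual API:

```python
import numpy as np

def csr_spmm(values, col_idx, row_ptr, B):
    """Sparse (CSR) x dense product: C = A @ B.

    values, col_idx, row_ptr: the CSR arrays of the sparse matrix A.
    B: dense right-hand-side matrix.
    """
    n_rows = len(row_ptr) - 1
    C = np.zeros((n_rows, B.shape[1]))
    for i in range(n_rows):
        # Each nonzero A[i, col_idx[jj]] scales one row of B.
        for jj in range(row_ptr[i], row_ptr[i + 1]):
            C[i] += values[jj] * B[col_idx[jj]]
    return C

# A = [[1, 0], [0, 2]] in CSR form
values = np.array([1.0, 2.0])
col_idx = np.array([0, 1])
row_ptr = np.array([0, 1, 2])
B = np.array([[3.0, 4.0], [5.0, 6.0]])
C = csr_spmm(values, col_idx, row_ptr, B)  # [[3, 4], [10, 12]]
```

GPU implementations parallelize the outer loop across rows (or row tiles) and vectorize the inner accumulation; the skew in nonzeros per row is what makes load balancing the hard part.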
☆270 · Updated 4 years ago
Alternatives and similar repositories for sputnik
Users interested in sputnik are comparing it to the libraries listed below.
- Training neural networks in TensorFlow 2.0 with 5x less memory ☆132 · Updated 3 years ago
- Assembler for NVIDIA Volta and Turing GPUs ☆224 · Updated 3 years ago
- SparseTIR: Sparse Tensor Compiler for Deep Learning ☆138 · Updated 2 years ago
- CUDA templates for tile-sparse matrix multiplication based on CUTLASS ☆51 · Updated 7 years ago
- Matrix Multiply-Accumulate with CUDA and WMMA (Tensor Core) ☆138 · Updated 4 years ago
- Code for the paper "Design Principles for Sparse Matrix Multiplication on the GPU", accepted to Euro-Par 2018 ☆72 · Updated 4 years ago
- A Fusion Code Generator for NVIDIA GPUs (commonly known as "nvFuser") ☆343 · Updated this week
- A Python-embedded DSL that makes it easy to write fast, scalable ML kernels with minimal boilerplate ☆187 · Updated this week
- CUDA Matrix Multiplication Optimization ☆201 · Updated 11 months ago
- An extension library of the WMMA API (Tensor Core API) ☆99 · Updated last year
- Shared Middle-Layer for Triton Compilation ☆258 · Updated this week
- Block-sparse primitives for PyTorch ☆157 · Updated 4 years ago
- High-speed GEMV kernels with up to 2.7x speedup over the PyTorch baseline ☆112 · Updated last year
- Research and development for optimizing transformers ☆129 · Updated 4 years ago
- A Vectorized N:M Format for Unleashing the Power of Sparse Tensor Cores ☆52 · Updated last year
- Magicube: a high-performance library for quantized sparse matrix operations (SpMM and SDDMM) in deep learning on Tensor Cores ☆89 · Updated 2 years ago
- PyTorch emulation library for Microscaling (MX)-compatible data formats ☆257 · Updated 3 weeks ago
- MLIR-based partitioning system ☆103 · Updated this week
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5) ☆255 · Updated 8 months ago
- A schedule language for large model training ☆149 · Updated last year
- Ahead-of-Time (AOT) Triton Math Library ☆70 · Updated this week
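Several entries above (the block-sparse primitives, the vectorized N:M format, Magicube) target structured sparsity patterns that Sparse Tensor Cores can exploit. A small NumPy sketch of 2:4 pruning — keeping the two largest-magnitude weights in each group of four, the pattern accelerated by NVIDIA Ampere Sparse Tensor Cores; this is an illustration of the format, not any listed library's API:

```python
import numpy as np

def prune_2_4(W):
    """Enforce 2:4 structured sparsity: in every contiguous group of 4
    weights along the last axis, zero the 2 smallest-magnitude entries."""
    rows, cols = W.shape
    assert cols % 4 == 0, "last dimension must be a multiple of 4"
    groups = W.reshape(rows, cols // 4, 4)
    # Indices of the 2 smallest |w| within each group of 4.
    drop = np.argsort(np.abs(groups), axis=-1)[..., :2]
    mask = np.ones_like(groups, dtype=bool)
    np.put_along_axis(mask, drop, False, axis=-1)
    return (groups * mask).reshape(rows, cols)

W = np.array([[0.1, -2.0, 0.3, 4.0, 1.0, 0.2, -0.1, 3.0]])
W24 = prune_2_4(W)  # [[0., -2., 0., 4., 1., 0., 0., 3.]]
```

Because exactly two of every four values survive, the hardware can store the nonzeros densely plus a 2-bit-per-value index, halving the memory traffic and doubling math throughput relative to the dense operand.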