google-research / sputnik
A library of GPU kernels for sparse matrix operations.
☆280 · Updated 5 years ago
Alternatives and similar repositories for sputnik
Users interested in sputnik compare it to the libraries listed below.
- Training neural networks in TensorFlow 2.0 with 5x less memory ☆137 · Updated 3 years ago
- Assembler for NVIDIA Volta and Turing GPUs ☆236 · Updated 3 years ago
- ☆254 · Updated last year
- Matrix Multiply-Accumulate with CUDA and WMMA (Tensor Core) ☆146 · Updated 5 years ago
- CUDA Matrix Multiplication Optimization ☆247 · Updated last year
- CUDA templates for tile-sparse matrix multiplication based on CUTLASS. ☆50 · Updated 7 years ago
- ☆145 · Updated 11 months ago
- ☆110 · Updated last year
- An extension library of the WMMA API (Tensor Core API) ☆109 · Updated last year
- SparseTIR: Sparse Tensor Compiler for Deep Learning ☆141 · Updated 2 years ago
- ☆186 · Updated last year
- AMD RAD's multi-GPU Triton-based framework for seamless multi-GPU programming ☆143 · Updated this week
- Code for the paper "Design Principles for Sparse Matrix Multiplication on the GPU", accepted to Euro-Par 2018 ☆73 · Updated 5 years ago
- A Fusion Code Generator for NVIDIA GPUs (commonly known as "nvFuser") ☆368 · Updated this week
- GitHub mirror of the triton-lang/triton repo. ☆111 · Updated this week
- ☆164 · Updated last year
- Research and development for optimizing transformers ☆131 · Updated 4 years ago
- High-speed GEMV kernels, with up to 2.7x speedup over the PyTorch baseline. ☆123 · Updated last year
- ☆165 · Updated 7 months ago
- Step-by-step optimization of CUDA SGEMM ☆416 · Updated 3 years ago
- A Vectorized N:M Format for Unleashing the Power of Sparse Tensor Cores ☆56 · Updated 2 years ago
- Magicube is a high-performance library for quantized sparse matrix operations (SpMM and SDDMM) for deep learning on Tensor Cores. ☆90 · Updated 3 years ago
- ☆152 · Updated last year
- Efficient Top-K implementation on the GPU ☆192 · Updated 6 years ago
- TileFusion is an experimental C++ macro kernel template library that raises the abstraction level of CUDA C for tile processing. ☆104 · Updated 6 months ago
- Efficient GPU support for LLM inference with x-bit quantization (e.g. FP6, FP5). ☆277 · Updated 5 months ago
- PyTorch emulation library for Microscaling (MX)-compatible data formats ☆329 · Updated 6 months ago
- PET: Optimizing Tensor Programs with Partially Equivalent Transformations and Automated Corrections ☆122 · Updated 3 years ago
- Kernel Tuner ☆377 · Updated last week
- Extensible collectives library in Triton ☆91 · Updated 9 months ago
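For context on what "sparse matrix operations" means for sputnik and the SpMM/SpMV libraries above, here is a minimal, CPU-only Python sketch (purely illustrative; it is not taken from any listed library) of a sparse matrix–vector product over the CSR (compressed sparse row) format that most of these GPU kernels operate on:

```python
# Illustrative sketch only: a CSR sparse-matrix * dense-vector product.
# Libraries like sputnik implement the same computation (and SpMM/SDDMM
# variants) as highly optimized CUDA kernels.

def csr_spmv(row_ptr, col_idx, values, x):
    """Compute y = A @ x for a matrix A stored in CSR form."""
    n_rows = len(row_ptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        acc = 0.0
        # Nonzeros of row i live in values[row_ptr[i]:row_ptr[i+1]].
        for k in range(row_ptr[i], row_ptr[i + 1]):
            acc += values[k] * x[col_idx[k]]
        y[i] = acc
    return y

# A = [[1, 0, 2],
#      [0, 0, 3],
#      [4, 5, 0]]
row_ptr = [0, 2, 3, 5]
col_idx = [0, 2, 2, 0, 1]
values = [1.0, 2.0, 3.0, 4.0, 5.0]
print(csr_spmv(row_ptr, col_idx, values, [1.0, 1.0, 1.0]))  # [3.0, 3.0, 9.0]
```

The GPU versions parallelize the outer loop across threads or warps and tile the nonzeros for memory coalescing; the data layout, however, is the same CSR triple shown here.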