google-research / sputnik
A library of GPU kernels for sparse matrix operations.
☆264 · Updated 4 years ago

Alternatives and similar repositories for sputnik:
Users interested in sputnik are comparing it to the libraries listed below.
- Assembler for NVIDIA Volta and Turing GPUs ☆218 · Updated 3 years ago
- CUDA templates for tile-sparse matrix multiplication based on CUTLASS ☆51 · Updated 7 years ago
- Training neural networks in TensorFlow 2.0 with 5x less memory ☆131 · Updated 3 years ago
- A tensor-aware point-to-point communication primitive for machine learning ☆257 · Updated 2 years ago
- An extension library for the WMMA API (Tensor Core API) ☆96 · Updated 9 months ago
- CUDA matrix multiplication optimization ☆184 · Updated 9 months ago
- SparseTIR: a sparse tensor compiler for deep learning ☆135 · Updated 2 years ago
- Code for the paper "Design Principles for Sparse Matrix Multiplication on the GPU", accepted to Euro-Par 2018 ☆71 · Updated 4 years ago
- An easy-to-understand TensorOp matmul tutorial ☆346 · Updated 7 months ago
- A simple high-performance CUDA GEMM implementation ☆367 · Updated last year
- Stores documents and resources used by the OpenXLA developer community ☆121 · Updated 9 months ago
- Benchmark code for the paper "Online normalizer calculation for softmax" ☆91 · Updated 6 years ago
- Extensible collectives library in Triton ☆86 · Updated last month
- A fusion code generator for NVIDIA GPUs (commonly known as "nvFuser") ☆324 · Updated this week
- High-speed GEMV kernels, with up to 2.7x speedup over the PyTorch baseline ☆106 · Updated 9 months ago
- MatMul performance benchmarks for a single CPU core, comparing hand-engineered and codegen kernels ☆130 · Updated last year
- Step-by-step optimization of CUDA SGEMM ☆315 · Updated 3 years ago
- PyTorch emulation library for Microscaling (MX)-compatible data formats ☆221 · Updated 3 weeks ago
- Magicube: a high-performance library for quantized sparse matrix operations (SpMM and SDDMM) in deep learning on Tensor Cores ☆87 · Updated 2 years ago
- Matrix multiply-accumulate with CUDA and WMMA (Tensor Cores) ☆131 · Updated 4 years ago
- Shared middle layer for Triton compilation ☆246 · Updated 2 weeks ago
- Research and development for optimizing transformers ☆126 · Updated 4 years ago
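One of the entries above benchmarks the "Online normalizer calculation for softmax" paper, whose core trick is computing the softmax normalizer in a single pass by rescaling a running sum whenever the running maximum grows. A minimal sketch of that idea in plain Python (the function name and structure here are illustrative, not taken from the benchmark repo):

```python
import math

def online_softmax(xs):
    """One-pass softmax via the online normalizer trick.

    Tracks a running maximum m and a running sum d of exp(x - m);
    whenever the maximum grows, d is rescaled by exp(m_old - m_new),
    so only one sweep over the input is needed before normalizing.
    """
    m = float("-inf")  # running maximum
    d = 0.0            # running sum of exp(x_i - m)
    for x in xs:
        m_new = max(m, x)
        d = d * math.exp(m - m_new) + math.exp(x - m_new)
        m = m_new
    return [math.exp(x - m) / d for x in xs]
```

The single pass is what makes this attractive for GPU kernels: the classic three-pass softmax (max, sum, normalize) reads the input from memory three times, while the online form fuses the first two reductions.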