marsupialtail / sparsednn
Fast sparse deep learning on CPUs
☆54 · Updated 2 years ago
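sparsednn provides fast sparse deep-learning kernels for CPUs. As a rough illustration of the core operation such libraries accelerate, here is a minimal CSR (compressed sparse row) matrix-vector product in Python; this is a generic sketch, not sparsednn's actual API, and the function name `csr_matvec` is invented for the example.

```python
# Generic sketch of the operation sparse-DNN kernels accelerate:
# y = A @ x with A stored in CSR form. NOT sparsednn's actual API.
import numpy as np

def csr_matvec(data, indices, indptr, x):
    """y[i] = sum of data[k] * x[indices[k]] over row i's slice of data."""
    n_rows = len(indptr) - 1
    y = np.zeros(n_rows, dtype=x.dtype)
    for i in range(n_rows):
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

# The 3x3 matrix [[1,0,2],[0,0,3],[4,5,0]] in CSR form:
data    = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # nonzero values
indices = np.array([0, 2, 2, 0, 1])            # column of each value
indptr  = np.array([0, 2, 3, 5])               # row start offsets
x = np.array([1.0, 1.0, 1.0])
print(csr_matvec(data, indices, indptr, x))    # [3. 3. 9.]
```

Skipping the zeros is where the speedup comes from: the inner loop touches only the stored nonzeros instead of all n² entries.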
Alternatives and similar repositories for sparsednn
Users interested in sparsednn are comparing it to the libraries listed below.
- ☆158 · Updated last year
- ☆69 · Updated 2 years ago
- Research and development for optimizing transformers ☆129 · Updated 4 years ago
- Training neural networks in TensorFlow 2.0 with 5x less memory ☆132 · Updated 3 years ago
- High-speed GEMV kernels, with up to 2.7x speedup over the PyTorch baseline ☆113 · Updated last year
- ☆144 · Updated 6 months ago
- System for automated integration of deep learning backends ☆47 · Updated 2 years ago
- LLaMA INT4 CUDA inference with AWQ ☆54 · Updated 6 months ago
- CUDA templates for tile-sparse matrix multiplication based on CUTLASS ☆51 · Updated 7 years ago
- GEMM- and Winograd-based convolutions using CUTLASS ☆26 · Updated 5 years ago
- SparseTIR: Sparse Tensor Compiler for Deep Learning ☆137 · Updated 2 years ago
- Benchmark code for the "Online normalizer calculation for softmax" paper (a single-pass sketch of the algorithm appears after this list) ☆96 · Updated 7 years ago
- A library of GPU kernels for sparse matrix operations ☆270 · Updated 4 years ago
- Customized matrix multiplication kernels ☆56 · Updated 3 years ago
- PyTorch extension for emulating FP8 data formats on standard FP32 Xeon/GPU hardware ☆110 · Updated 8 months ago
- ☆50 · Updated last year
- [MLSys 2021] IOS: Inter-Operator Scheduler for CNN Acceleration ☆200 · Updated 3 years ago
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing ☆93 · Updated last month
- ☆85 · Updated 9 months ago
- Ahead-of-Time (AOT) Triton Math Library ☆75 · Updated this week
- PET: Optimizing Tensor Programs with Partially Equivalent Transformations and Automated Corrections ☆122 · Updated 3 years ago
- ☆154 · Updated 2 years ago
- An experimental CPU backend for Triton (https://github.com/openai/triton) ☆43 · Updated 4 months ago
- A Winograd Minimal Filter Implementation in CUDA (see the F(2,3) sketch after this list) ☆25 · Updated 3 years ago
- oneCCL Bindings for PyTorch ☆99 · Updated this week
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5) ☆260 · Updated 3 weeks ago
- A Python library that transfers PyTorch tensors between CPU and NVMe ☆118 · Updated 8 months ago
- A schedule language for large model training ☆149 · Updated last year
- Framework to reduce autotune overhead to zero for well-known deployments ☆79 · Updated 2 weeks ago
- MLIR-based partitioning system ☆115 · Updated this week
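The "Online normalizer calculation for softmax" entry above benchmarks the single-pass algorithm from Milakov and Gimelshein's paper of the same name: the running maximum and the running normalizer are updated together, so the input is scanned once instead of twice (once for the max, once for the sum). A minimal Python sketch:

```python
# Single-pass ("online") softmax normalizer, after Milakov & Gimelshein,
# "Online normalizer calculation for softmax" (2018).
import numpy as np

def online_softmax(x):
    m = float("-inf")  # running maximum seen so far
    d = 0.0            # running normalizer, kept scaled by exp(-m)
    for v in x:
        m_new = max(m, v)
        # rescale the partial sum whenever a new maximum is found
        d = d * np.exp(m - m_new) + np.exp(v - m_new)
        m = m_new
    return np.exp(np.asarray(x) - m) / d

x = np.array([1.0, 2.0, 3.0])
ref = np.exp(x - x.max()) / np.exp(x - x.max()).sum()  # two-pass reference
assert np.allclose(online_softmax(x), ref)
```

The exp(m - m_new) rescaling is what keeps the partial sum numerically consistent as the maximum changes; the same trick underlies fused-softmax attention kernels.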
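Likewise, the Winograd minimal-filter entry implements, in CUDA, the F(2,3) scheme from Lavin and Gray's "Fast Algorithms for Convolutional Neural Networks": two outputs of a 3-tap convolution from 4 multiplications instead of the naive 6. A scalar Python sketch of the arithmetic (the repository's actual CUDA structure is not shown here):

```python
# Winograd minimal filtering F(2,3): two outputs of a 3-tap convolution
# using 4 multiplications (m1..m4) instead of the naive 6.
def winograd_f23(d, g):
    d0, d1, d2, d3 = d  # 4 input samples
    g0, g1, g2 = g      # 3 filter taps
    m1 = (d0 - d2) * g0
    m2 = (d1 + d2) * (g0 + g1 + g2) / 2
    m3 = (d2 - d1) * (g0 - g1 + g2) / 2
    m4 = (d1 - d3) * g2
    return (m1 + m2 + m3, m2 - m3 - m4)

d = [1.0, 2.0, 3.0, 4.0]
g = [0.5, 1.0, -1.0]
direct = (d[0]*g[0] + d[1]*g[1] + d[2]*g[2],   # naive sliding dot products
          d[1]*g[0] + d[2]*g[1] + d[3]*g[2])
assert winograd_f23(d, g) == direct
```

The filter-side factors (g0 + g1 + g2)/2 and (g0 - g1 + g2)/2 depend only on the weights, so convolution kernels precompute them once and amortize the transform across all input tiles.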