marsupialtail / sparsednn
Fast sparse deep learning on CPUs
☆56 · Updated 3 years ago
Alternatives and similar repositories for sparsednn
Users interested in sparsednn are comparing it to the libraries listed below.
- Research and development for optimizing transformers ☆131 · Updated 4 years ago
- ☆158 · Updated 2 years ago
- Training neural networks in TensorFlow 2.0 with 5x less memory ☆136 · Updated 3 years ago
- GEMM and Winograd-based convolutions using CUTLASS ☆28 · Updated 5 years ago
- ☆68 · Updated 2 years ago
- Customized matrix multiplication kernels ☆57 · Updated 3 years ago
- A library of GPU kernels for sparse matrix operations. ☆273 · Updated 4 years ago
- CUDA templates for tile-sparse matrix multiplication based on CUTLASS. ☆50 · Updated 7 years ago
- High-speed GEMV kernels with up to a 2.7x speedup over the PyTorch baseline. ☆116 · Updated last year
- System for automated integration of deep learning backends. ☆47 · Updated 3 years ago
- A Python library that transfers PyTorch tensors between CPU and NVMe. ☆121 · Updated 10 months ago
- ☆50 · Updated last year
- SparseTIR: Sparse Tensor Compiler for Deep Learning ☆139 · Updated 2 years ago
- Llama INT4 CUDA inference with AWQ ☆55 · Updated 8 months ago
- Fairring (FAIR + Herring) is a plug-in for PyTorch that provides a process group for distributed training that outperforms NCCL at large … ☆65 · Updated 3 years ago
- PyTorch extension for emulating FP8 data formats on standard FP32 Xeon/GPU hardware. ☆111 · Updated 10 months ago
- Benchmark code for the "Online normalizer calculation for softmax" paper ☆101 · Updated 7 years ago
- ☆145 · Updated 8 months ago
- [MLSys 2021] IOS: Inter-Operator Scheduler for CNN Acceleration ☆200 · Updated 3 years ago
- Lightweight and Parallel Deep Learning Framework ☆264 · Updated 2 years ago
- Training material for IPU users: tutorials, feature examples, simple applications ☆87 · Updated 2 years ago
- pytorch-profiler ☆51 · Updated 2 years ago
- An experimental CPU backend for Triton (https://github.com/openai/triton) ☆45 · Updated last month
- A Winograd Minimal Filter Implementation in CUDA ☆28 · Updated 4 years ago
- Benchmarks to capture important workloads. ☆31 · Updated 8 months ago
- ☆72 · Updated 6 months ago
- Efficient GPU support for LLM inference with x-bit quantization (e.g. FP6, FP5). ☆265 · Updated 2 months ago
- PET: Optimizing Tensor Programs with Partially Equivalent Transformations and Automated Corrections ☆122 · Updated 3 years ago
- TVM stack: exploring the incredible explosion of deep-learning frameworks and how to bring them together ☆64 · Updated 7 years ago
- Benchmark scripts for TVM ☆74 · Updated 3 years ago