marsupialtail / sparsednn
Fast sparse deep learning on CPUs
☆53 · Updated 2 years ago
Alternatives and similar repositories for sparsednn:
Users interested in sparsednn are comparing it to the libraries listed below.
- ☆157 · Updated last year
- System for automated integration of deep learning backends. ☆47 · Updated 2 years ago
- ☆69 · Updated 2 years ago
- High-speed GEMV kernels, with up to 2.7x speedup over the PyTorch baseline. ☆103 · Updated 9 months ago
- Research and development for optimizing transformers ☆125 · Updated 4 years ago
- GEMM and Winograd based convolutions using CUTLASS ☆26 · Updated 4 years ago
- FTPipe and related pipeline model parallelism research. ☆41 · Updated last year
- Benchmark code for the "Online normalizer calculation for softmax" paper (a minimal sketch of the algorithm appears after this list). ☆90 · Updated 6 years ago
- Training neural networks in TensorFlow 2.0 with 5x less memory ☆130 · Updated 3 years ago
- SparseTIR: Sparse Tensor Compiler for Deep Learning ☆134 · Updated 2 years ago
- Memory Optimizations for Deep Learning (ICML 2023) ☆62 · Updated last year
- PyTorch extension for emulating FP8 data formats on standard FP32 Xeon/GPU hardware. ☆108 · Updated 4 months ago
- A Python library that transfers PyTorch tensors between CPU and NVMe ☆113 · Updated 4 months ago
- Home for OctoML PyTorch Profiler ☆112 · Updated last year
- Customized matrix multiplication kernels ☆54 · Updated 3 years ago
- llama INT4 cuda inference with AWQ ☆54 · Updated 2 months ago
- ☆197 · Updated 9 months ago
- CUDA templates for tile-sparse matrix multiplication based on CUTLASS. ☆51 · Updated 7 years ago
- ☆50 · Updated last year
- ☆142 · Updated 2 years ago
- Standalone Flash Attention v2 kernel without libtorch dependency ☆108 · Updated 7 months ago
- Benchmark scripts for TVM ☆74 · Updated 3 years ago
- ☆103 · Updated 7 months ago
- Fairring (FAIR + Herring) is a plug-in for PyTorch that provides a process group for distributed training that outperforms NCCL at large … ☆65 · Updated 3 years ago
- PET: Optimizing Tensor Programs with Partially Equivalent Transformations and Automated Corrections ☆119 · Updated 2 years ago
- This repository contains integer operators on GPUs for PyTorch. ☆201 · Updated last year
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer ☆91 · Updated 2 weeks ago
- play gemm with tvm ☆90 · Updated last year
- Benchmark PyTorch Custom Operators ☆14 · Updated last year
- ☆68 · Updated 3 weeks ago
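
One of the listed projects benchmarks the "Online normalizer calculation for softmax" paper. Below is a minimal Python sketch of that algorithm, assuming the standard single-pass normalizer formulation described in the paper; it is an illustration only, not code taken from the benchmark repository.

```python
import math

def online_softmax(xs):
    """Softmax with a single pass over the input to obtain the normalizer:
    track a running maximum and rescale the running sum of exponentials
    whenever the maximum changes ("Online normalizer calculation for softmax",
    Milakov & Gimelshein)."""
    m = float("-inf")  # running maximum seen so far
    d = 0.0            # running sum of exp(x_i - m)
    for x in xs:
        m_new = max(m, x)
        # Rescale the existing sum to the new maximum, then add the new term.
        d = d * math.exp(m - m_new) + math.exp(x - m_new)
        m = m_new
    # Second pass only to emit the normalized probabilities.
    return [math.exp(x - m) / d for x in xs]

if __name__ == "__main__":
    print(online_softmax([1.0, 2.0, 3.0]))  # ~[0.090, 0.245, 0.665]
```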