marsupialtail / sparsednn
Fast sparse deep learning on CPUs
☆56 · Updated 2 years ago
Alternatives and similar repositories for sparsednn
Users interested in sparsednn are comparing it to the libraries listed below.
- ☆159 · Updated 2 years ago
- Research and development for optimizing transformers ☆130 · Updated 4 years ago
- A library of GPU kernels for sparse matrix operations. ☆271 · Updated 4 years ago
- High-speed GEMV kernels, up to 2.7x speedup over the PyTorch baseline. ☆114 · Updated last year
- ☆69 · Updated 2 years ago
- GEMM and Winograd based convolutions using CUTLASS ☆27 · Updated 5 years ago
- Training neural networks in TensorFlow 2.0 with 5x less memory ☆134 · Updated 3 years ago
- Benchmark code for the "Online normalizer calculation for softmax" paper ☆98 · Updated 7 years ago
- PyTorch extension for emulating FP8 data formats on standard FP32 Xeon/GPU hardware. ☆111 · Updated 9 months ago
- Training material for IPU users: tutorials, feature examples, simple applications ☆87 · Updated 2 years ago
- SparseTIR: Sparse Tensor Compiler for Deep Learning ☆138 · Updated 2 years ago
- CUDA templates for tile-sparse matrix multiplication based on CUTLASS. ☆51 · Updated 7 years ago
- Llama INT4 CUDA inference with AWQ ☆54 · Updated 8 months ago
- ☆144 · Updated 7 months ago
- Efficient GPU support for LLM inference with x-bit quantization (e.g. FP6, FP5). ☆265 · Updated 2 months ago
- System for automated integration of deep learning backends. ☆47 · Updated 3 years ago
- Performance of the C++ interfaces of FlashAttention and FlashAttention-2 in large language model (LLM) inference scenarios. ☆40 · Updated 6 months ago
- ☆50 · Updated last year
- ☆158 · Updated 2 years ago
- ☆111 · Updated last year
- [MLSys 2021] IOS: Inter-Operator Scheduler for CNN Acceleration ☆200 · Updated 3 years ago
- An experimental CPU backend for Triton (https://github.com/openai/triton) ☆45 · Updated last month
- A Python library that transfers PyTorch tensors between CPU and NVMe ☆121 · Updated 9 months ago
- ☆231 · Updated last year
- Several optimization methods for half-precision general matrix-vector multiplication (HGEMV) using CUDA cores. ☆64 · Updated last year
- Extensible collectives library in Triton ☆87 · Updated 5 months ago
- PET: Optimizing Tensor Programs with Partially Equivalent Transformations and Automated Corrections ☆122 · Updated 3 years ago
- Customized matrix multiplication kernels ☆56 · Updated 3 years ago
- Framework to reduce autotune overhead to zero for well-known deployments. ☆82 · Updated this week
- PyTorch RFCs (experimental) ☆135 · Updated 3 months ago
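One entry above benchmarks the "Online normalizer calculation for softmax" paper, which computes the softmax max and normalizer together in a single pass over the input instead of the usual separate max and sum passes. A minimal sketch of that recurrence (function name and structure are my own, not taken from the benchmark repo):

```python
import math

def online_softmax(xs):
    """Softmax with the online normalizer: one fused pass tracks the
    running maximum m and running sum d, rescaling d whenever the
    maximum changes; a second pass emits the normalized outputs."""
    m = float("-inf")  # running maximum seen so far
    d = 0.0            # running sum of exp(x - m)
    for x in xs:
        m_new = max(m, x)
        # rescale the accumulated sum to the new maximum, then add this term
        d = d * math.exp(m - m_new) + math.exp(x - m_new)
        m = m_new
    return [math.exp(x - m) / d for x in xs]
```

The rescaling step is what lets the max and the sum be computed in the same loop; the same trick underlies the streaming softmax used by FlashAttention, which also appears in this list.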