marsupialtail / sparsednn
Fast sparse deep learning on CPUs
☆53 · Updated 2 years ago
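sparsednn's focus is fast sparse kernels for deep learning inference on CPUs. As a point of reference for what such kernels compute, here is a minimal NumPy sketch of a sparse matrix–vector product over the CSR format; the function and variable names are illustrative, not taken from sparsednn's API.

```python
import numpy as np
from scipy.sparse import random as sparse_random

def csr_spmv(data, indices, indptr, x):
    """y = A @ x for a matrix A stored in CSR format.

    data    -- nonzero values, laid out row by row
    indices -- column index of each nonzero
    indptr  -- indptr[i]:indptr[i+1] slices row i's nonzeros
    """
    n_rows = len(indptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

# Sanity check against SciPy on a random 5%-dense matrix.
A = sparse_random(64, 128, density=0.05, format="csr")
x = np.random.rand(128)
assert np.allclose(csr_spmv(A.data, A.indices, A.indptr, x), A @ x)
```

Production kernels like sparsednn's get their speed from blocking, vectorization, and code generation rather than this scalar loop, but the computation they perform is the same.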
Alternatives and similar repositories for sparsednn:
Users interested in sparsednn are comparing it to the libraries listed below.
- Research and development for optimizing transformers ☆126 · Updated 4 years ago
- ☆69 · Updated 2 years ago
- ☆158 · Updated last year
- High-speed GEMV kernels, up to 2.7x speedup over the PyTorch baseline ☆106 · Updated 9 months ago
- Benchmarks to capture important workloads. ☆31 · Updated 3 months ago
- System for automated integration of deep learning backends. ☆48 · Updated 2 years ago
- Training neural networks in TensorFlow 2.0 with 5x less memory ☆131 · Updated 3 years ago
- ☆145 · Updated 2 years ago
- Memory Optimizations for Deep Learning (ICML 2023) ☆64 · Updated last year
- GEMM and Winograd based convolutions using CUTLASS ☆26 · Updated 4 years ago
- FTPipe and related pipeline model parallelism research. ☆41 · Updated last year
- Benchmark code for the "Online normalizer calculation for softmax" paper (see the sketch after this list) ☆91 · Updated 6 years ago
- PyTorch extension for emulating FP8 data formats on standard FP32 Xeon/GPU hardware. ☆109 · Updated 5 months ago
- SparseTIR: Sparse Tensor Compiler for Deep Learning ☆135 · Updated 2 years ago
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆84 · Updated last week
- ☆16 · Updated last year
- PET: Optimizing Tensor Programs with Partially Equivalent Transformations and Automated Corrections ☆121 · Updated 2 years ago
- Home for OctoML PyTorch Profiler ☆113 · Updated 2 years ago
- ☆142 · Updated 3 months ago
- llama INT4 CUDA inference with AWQ ☆54 · Updated 3 months ago
- ☆50 · Updated last year
- ☆193 · Updated 2 years ago
- ☆205 · Updated 5 months ago
- A library of GPU kernels for sparse matrix operations. ☆264 · Updated 4 years ago
- This repository contains integer operators on GPUs for PyTorch. ☆204 · Updated last year
- ☆202 · Updated 9 months ago
- The quantitative performance comparison among DL compilers on CNN models. ☆74 · Updated 4 years ago
- Customized matrix multiplication kernels ☆54 · Updated 3 years ago
- Performance of the C++ interfaces of FlashAttention and FlashAttention-2 in large language model (LLM) inference scenarios. ☆36 · Updated 2 months ago
- Tensor Train based compression library for sparse embedding tables used in large-scale machine learning models such as … ☆194 · Updated 2 years ago
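The "Online normalizer calculation for softmax" entry above refers to the single-pass algorithm of Milakov and Gimelshein (2018), which fuses the max and sum reductions of a numerically stable softmax into one traversal. Here is a minimal Python sketch of the recurrence, not code from the benchmark repo itself:

```python
import math

def online_softmax(xs):
    """Numerically stable softmax with a single-pass normalizer.

    Maintains the running maximum m and the running sum d of exp(x - m);
    whenever a new maximum appears, d is rescaled by exp(m_old - m_new),
    so the max and sum reductions are fused into one loop over the input.
    """
    m, d = float("-inf"), 0.0
    for x in xs:
        m_new = max(m, x)
        d = d * math.exp(m - m_new) + math.exp(x - m_new)
        m = m_new
    return [math.exp(x - m) / d for x in xs]

print(online_softmax([1.0, 2.0, 3.0]))  # ≈ [0.0900, 0.2447, 0.6652]
```

The same rescaling trick underlies the tiled attention computation in FlashAttention, which is why the two repos appear in the same list.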