ptillet / torch-blocksparse
Block-sparse primitives for PyTorch
☆154 · Updated 3 years ago
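As context for what "block-sparse primitives" means here: a minimal sketch of block-sparse matrix multiplication in plain PyTorch, where only the blocks marked in a block-level layout are stored and multiplied. This illustrates the idea the library's GPU kernels exploit; it is not the torch-blocksparse API, and `layout`, `block`, and the loop are illustrative assumptions.

```python
import torch

block = 16
n_blocks = 4  # A is (64, 64), viewed as a 4x4 grid of 16x16 blocks

# Block-level sparsity layout: 1 = block present, 0 = block is all zeros.
layout = torch.tensor([
    [1, 0, 0, 1],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [1, 0, 0, 1],
], dtype=torch.bool)

# Dense reference operand built from the layout (zero outside kept blocks).
A = torch.randn(n_blocks * block, n_blocks * block)
mask = layout.repeat_interleave(block, 0).repeat_interleave(block, 1)
A = A * mask
x = torch.randn(n_blocks * block, 8)

# "Block-sparse" product: iterate only over the stored blocks.
y = torch.zeros(n_blocks * block, 8)
for i, j in layout.nonzero():
    a_blk = A[i * block:(i + 1) * block, j * block:(j + 1) * block]
    y[i * block:(i + 1) * block] += a_blk @ x[j * block:(j + 1) * block]

# Matches the dense product, while skipping the empty blocks entirely.
assert torch.allclose(y, A @ x, atol=1e-5)
```

Real block-sparse kernels get their speedup by launching work only for the blocks present in the layout, rather than looping in Python as above.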
Alternatives and similar repositories for torch-blocksparse:
Users interested in torch-blocksparse are comparing it to the libraries listed below.
- Fast Block Sparse Matrices for PyTorch ☆546 · Updated 4 years ago
- Research and development for optimizing transformers ☆125 · Updated 4 years ago
- Butterfly matrix multiplication in PyTorch ☆168 · Updated last year
- ☆202 · Updated 2 years ago
- Implementation of a Transformer, but completely in Triton ☆263 · Updated 2 years ago
- CUDA templates for tile-sparse matrix multiplication based on CUTLASS ☆50 · Updated 7 years ago
- CUDA kernels for generalized matrix multiplication in PyTorch ☆79 · Updated 3 years ago
- Low Precision Arithmetic Simulation in PyTorch ☆273 · Updated 10 months ago
- Customized matrix multiplication kernels ☆54 · Updated 3 years ago
- PyTorch implementation of the L2L execution algorithm ☆107 · Updated 2 years ago
- A library of GPU kernels for sparse matrix operations ☆259 · Updated 4 years ago
- ☆163 · Updated 9 months ago
- Code for the paper "SWALP: Stochastic Weight Averaging for Low-Precision Training" ☆62 · Updated 5 years ago
- Training neural networks in TensorFlow 2.0 with 5x less memory ☆129 · Updated 3 years ago
- Structured matrices for compressing neural networks ☆66 · Updated last year
- Slicing a PyTorch Tensor Into Parallel Shards ☆298 · Updated 3 years ago
- [Prototype] Tools for the concurrent manipulation of variably sized Tensors ☆252 · Updated 2 years ago
- [ACL'20] HAT: Hardware-Aware Transformers for Efficient Natural Language Processing ☆331 · Updated 8 months ago
- MONeT framework for reducing memory consumption of DNN training ☆173 · Updated 3 years ago
- Practical low-rank gradient compression for distributed optimization: https://arxiv.org/abs/1905.13727 ☆147 · Updated 5 months ago
- [ICLR 2020] Drawing Early-Bird Tickets: Toward More Efficient Training of Deep Networks ☆137 · Updated 4 years ago
- Easy-to-use AdaHessian optimizer (PyTorch) ☆77 · Updated 4 years ago
- End-to-end training of sparse deep neural networks with little-to-no performance loss ☆320 · Updated 2 years ago
- ☆157 · Updated last year
- Memory Optimizations for Deep Learning (ICML 2023) ☆62 · Updated last year
- [ICML 2020] Code for "PowerNorm: Rethinking Batch Normalization in Transformers" https://arxiv.org/abs/2003.07845 ☆119 · Updated 3 years ago
- ☆141 · Updated last year
- [JMLR'20] NeurIPS 2019 MicroNet Challenge Efficient Language Modeling, Champion ☆40 · Updated 4 years ago
- ☆36 · Updated 3 months ago
- Torch Distributed Experimental ☆115 · Updated 7 months ago