ptillet / torch-blocksparse
Block-sparse primitives for PyTorch
☆160 · Updated 4 years ago
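As context for the comparisons below: a block-sparse matmul stores and computes only a fixed grid of nonzero blocks described by a boolean layout. The sketch below illustrates that idea in plain PyTorch; it is not the torch-blocksparse API, and the layout, block size, and variable names are illustrative assumptions.

```python
# Plain-PyTorch illustration of block-sparse matrix multiplication.
# NOTE: this does NOT use the torch-blocksparse API; the layout, block
# size, and names below are assumptions for the sake of the example.
import torch

block = 16                       # block size (illustrative)
n_blocks = 4                     # weight is (n_blocks*block) x (n_blocks*block)
n = n_blocks * block

# Boolean layout: which blocks of the weight matrix are nonzero.
layout = torch.tensor([[1, 0, 0, 1],
                       [0, 1, 0, 0],
                       [0, 0, 1, 0],
                       [1, 0, 0, 1]], dtype=torch.bool)

# Dense weight with blocks zeroed out according to the layout.
w = torch.randn(n, n)
mask = layout.repeat_interleave(block, dim=0).repeat_interleave(block, dim=1)
w = w * mask

x = torch.randn(8, n)            # a batch of 8 input vectors

# Reference: dense matmul against the masked weight (y = x @ W^T).
y_dense = x @ w.t()

# Block-sparse evaluation: touch only the nonzero blocks.
y_sparse = torch.zeros(8, n)
for i, j in layout.nonzero(as_tuple=False).tolist():
    rows = slice(i * block, (i + 1) * block)   # output block
    cols = slice(j * block, (j + 1) * block)   # input block
    y_sparse[:, rows] += x[:, cols] @ w[rows, cols].t()

assert torch.allclose(y_dense, y_sparse, atol=1e-5)
```

The library's GPU kernels achieve this by launching work only for the blocks listed in the layout, rather than looping over them in Python as the sketch does.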
Alternatives and similar repositories for torch-blocksparse
Users interested in torch-blocksparse are comparing it to the libraries listed below.
- Butterfly matrix multiplication in PyTorch ☆174 · Updated 2 years ago
- Research and development for optimizing transformers ☆130 · Updated 4 years ago
- A library of GPU kernels for sparse matrix operations. ☆273 · Updated 4 years ago
- ☆218 · Updated 2 years ago
- Training neural networks in TensorFlow 2.0 with 5x less memory ☆135 · Updated 3 years ago
- Low Precision Arithmetic Simulation in PyTorch ☆285 · Updated last year
- CUDA templates for tile-sparse matrix multiplication based on CUTLASS. ☆50 · Updated 7 years ago
- Implementation of a Transformer, but completely in Triton ☆275 · Updated 3 years ago
- ☆177 · Updated last year
- End-to-end training of sparse deep neural networks with little-to-no performance loss. ☆326 · Updated 2 years ago
- Customized matrix multiplication kernels ☆56 · Updated 3 years ago
- Fast Block Sparse Matrices for PyTorch ☆547 · Updated 4 years ago
- Slicing a PyTorch Tensor Into Parallel Shards ☆301 · Updated 3 months ago
- [ACL'20] HAT: Hardware-Aware Transformers for Efficient Natural Language Processing ☆335 · Updated last year
- CUDA kernels for generalized matrix-multiplication in PyTorch ☆85 · Updated 3 years ago
- A GPU performance profiling tool for PyTorch models ☆505 · Updated 4 years ago
- PyTorch implementation of L2L execution algorithm ☆108 · Updated 2 years ago
- ☆158 · Updated 2 years ago
- Structured matrices for compressing neural networks ☆67 · Updated 2 years ago
- [ICLR 2020] Drawing Early-Bird Tickets: Toward More Efficient Training of Deep Networks ☆139 · Updated 4 years ago
- PyTorch interface for the IPU ☆181 · Updated last year
- ☆113 · Updated last year
- Distributed K-FAC preconditioner for PyTorch ☆90 · Updated this week
- Dynamic Tensor Rematerialization prototype (modified PyTorch) and simulator. Paper: https://arxiv.org/abs/2006.09616 ☆132 · Updated 2 years ago
- A Tensor Train based compression library for compressing sparse embedding tables used in large-scale machine learning models such as … ☆194 · Updated 3 years ago
- Torch Distributed Experimental ☆117 · Updated last year
- MONeT framework for reducing memory consumption of DNN training ☆174 · Updated 4 years ago
- Code for the paper "SWALP: Stochastic Weight Averaging for Low-Precision Training" ☆62 · Updated 6 years ago
- PyTorch RFCs (experimental) ☆135 · Updated 4 months ago
- ☆253 · Updated last year