adelmanm / approx
Code for the paper "Faster Neural Network Training with Approximate Tensor Operations"
☆10 · Updated 4 years ago
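The paper behind this repository studies speeding up training by replacing exact tensor operations with sampled approximations. As a minimal illustration of that idea (not the repository's actual implementation), the sketch below approximates a matrix product by sampling column-row pairs with probability proportional to their norms and rescaling so the estimate is unbiased; the name `approx_matmul` is hypothetical.

```python
import numpy as np

def approx_matmul(A, B, k, rng=None):
    """Illustrative sketch: approximate A @ B by sampling k column-row
    pairs (not the paper's actual code)."""
    rng = np.random.default_rng(rng)
    n = A.shape[1]
    # Sample index i with probability proportional to ||A[:, i]|| * ||B[i, :]||,
    # which minimizes the variance of the estimator.
    norms = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
    p = norms / norms.sum()
    idx = rng.choice(n, size=k, replace=True, p=p)
    # Rescale each sampled outer product by 1 / (k * p_i) for unbiasedness.
    scale = 1.0 / (k * p[idx])
    return (A[:, idx] * scale) @ B[idx, :]
```

With more samples `k`, the approximation error shrinks, trading accuracy for the cost of the multiply.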
Alternatives and similar repositories for approx
Users interested in approx are comparing it to the libraries listed below.
- Block Sparse movement pruning ☆83 · Updated 5 years ago
- ICLR 2021 ☆48 · Updated 4 years ago
- Official implementation of the NeurIPS 2020 "Sparse Weight Activation Training" paper. ☆29 · Updated 4 years ago
- PyTorch library for factorized L0-based pruning. ☆45 · Updated 2 years ago
- Differentiable Product Quantization for End-to-End Embedding Compression. ☆64 · Updated 3 years ago
- Learning Accurate Decision Trees with Bandit Feedback via Quantized Gradient Descent ☆17 · Updated 3 years ago
- Single-shot neural network pruning before training the model, based on connection sensitivity ☆11 · Updated 6 years ago
- ☆17 · Updated 5 years ago
- [NeurIPS 2020] "The Lottery Ticket Hypothesis for Pre-trained BERT Networks", Tianlong Chen, Jonathan Frankle, Shiyu Chang, Sijia Liu, Ya… ☆142 · Updated 4 years ago
- This package implements THOR: Transformer with Stochastic Experts. ☆65 · Updated 4 years ago
- Code for L0-ARM: Network Sparsification via Stochastic Binary Optimization ☆15 · Updated 6 years ago
- A Learnable LSH Framework for Efficient NN Training ☆34 · Updated 4 years ago
- Official PyTorch implementation of HELP: Hardware-adaptive Efficient Latency Prediction for NAS via Meta-Learning (NeurIPS 2021 Spotlight… ☆64 · Updated last year
- PyTorch implementation of HashedNets ☆38 · Updated 2 years ago
- The accompanying code for "Memory-efficient Transformers via Top-k Attention" (Ankit Gupta, Guy Dar, Shaya Goodman, David Ciprut, Jonatha… ☆70 · Updated 4 years ago
- ☆11 · Updated 2 years ago
- [IJCAI'22 Survey] Recent Advances on Neural Network Pruning at Initialization. ☆59 · Updated 2 years ago
- Code for "Training Neural Networks with Fixed Sparse Masks" (NeurIPS 2021). ☆59 · Updated 4 years ago
- Parameter-Efficient Transfer Learning with Diff Pruning ☆74 · Updated 5 years ago
- ☆22 · Updated 5 years ago
- Johnson-Lindenstrauss transform (JLT), random projections (RP), fast Johnson-Lindenstrauss transform (FJLT), and randomized Hadamard tran… ☆21 · Updated 2 years ago
- [NeurIPS 2022] DataMUX: Data Multiplexing for Neural Networks ☆60 · Updated 3 years ago
- ☆21 · Updated last year
- Practical low-rank gradient compression for distributed optimization: https://arxiv.org/abs/1905.13727 ☆149 · Updated last year
- [JMLR'20] NeurIPS 2019 MicroNet Challenge Efficient Language Modeling, Champion ☆41 · Updated 4 years ago
- Sparsity support for PyTorch ☆38 · Updated 10 months ago
- [KDD'22] Learned Token Pruning for Transformers ☆102 · Updated 2 years ago
- ☆222 · Updated 2 years ago
- DropIT: Dropping Intermediate Tensors for Memory-Efficient DNN Training (ICLR 2023) ☆32 · Updated 2 years ago
- Block-sparse primitives for PyTorch ☆158 · Updated 4 years ago
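One of the listed repositories covers Johnson-Lindenstrauss transforms and random projections. As a generic sketch of that technique (not tied to any listed repo's API), a dense Gaussian random projection maps vectors to a lower dimension while approximately preserving norms and pairwise distances; the function name `jl_project` is hypothetical.

```python
import numpy as np

def jl_project(X, d, rng=None):
    """Generic sketch of a dense Gaussian Johnson-Lindenstrauss projection:
    maps the rows of X to dimension d while approximately preserving
    Euclidean norms and pairwise distances."""
    rng = np.random.default_rng(rng)
    n_features = X.shape[1]
    # Entries drawn i.i.d. from N(0, 1/d), so E[||x @ P||^2] = ||x||^2.
    P = rng.normal(0.0, 1.0 / np.sqrt(d), size=(n_features, d))
    return X @ P
```

Distortion shrinks as the target dimension `d` grows, roughly on the order of 1/sqrt(d); the fast (FJLT) and Hadamard variants mentioned in the repo description replace the dense Gaussian matrix with structured transforms to cut the projection cost.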