ciodar / deep-compression
PyTorch Lightning implementation of the paper Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding. This repository makes it possible to reproduce the paper's main findings on the MNIST and Imagenette datasets.
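The paper's first two stages (magnitude pruning, then weight sharing via clustering) can be sketched in a few lines of NumPy. This is an illustrative sketch, not this repository's API: the function names are invented, and uniform centroids stand in for the k-means weight sharing used in the paper; Huffman coding of the resulting indices is omitted.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Pruning stage: zero out the smallest-magnitude fraction of weights.

    `sparsity` is the fraction of weights to remove (e.g. 0.5 removes half).
    """
    flat = np.abs(weights).ravel()
    k = int(len(flat) * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

def uniform_quantize(weights, n_clusters):
    """Crude stand-in for the paper's k-means weight sharing: snap each
    weight to the nearest of `n_clusters` evenly spaced centroids, so the
    layer stores only centroid indices plus a small codebook."""
    centroids = np.linspace(weights.min(), weights.max(), n_clusters)
    idx = np.abs(weights[..., None] - centroids).argmin(axis=-1)
    return centroids[idx]
```

After these two steps the surviving weights take only `n_clusters` distinct values, which is what makes the final Huffman-coding stage effective.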
☆35, updated last year
Alternatives and similar repositories for deep-compression
Users interested in deep-compression are comparing it to the repositories listed below.
- Differentiable Weightless Neural Networks (☆31, updated 10 months ago)
- Torch2Chip (MLSys 2024) (☆55, updated 9 months ago)
- Binarize convolutional neural networks using PyTorch (☆149, updated 3 years ago)
- BibTeX for the Sparsity in Deep Learning paper (https://arxiv.org/abs/2102.00554); open for pull requests (☆46, updated 3 years ago)
- (unnamed repository) (☆78, updated 3 years ago)
- (unnamed repository) (☆13, updated 6 months ago)
- Official implementation of the ECCV 2022 paper LIMPQ, "Mixed-Precision Neural Network Quantization via Learned Layer-wise Importance" (☆61, updated 2 years ago)
- Implementation of "NITI: Training Integer Neural Networks Using Integer-only Arithmetic" (arXiv) (☆90, updated 3 years ago)
- [IJCAI 2022 survey] Recent Advances on Neural Network Pruning at Initialization (☆59, updated 2 years ago)
- (unnamed repository) (☆17, updated 3 years ago)
- Code for "CHIP: CHannel Independence-based Pruning for Compact Neural Networks" (NeurIPS 2021) (☆39, updated 3 years ago)
- [NeurIPS 2023] ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer (☆30, updated 2 years ago)