Andrew-Tierno / QuantizedTransformer
Implementation of a Quantized Transformer Model
☆19 · Updated 6 years ago
Alternatives and similar repositories for QuantizedTransformer
Users interested in QuantizedTransformer are comparing it to the libraries listed below.
- Implementation of NeurIPS 2019 paper "Normalization Helps Training of Quantized LSTM" ☆31 · Updated last year
- Implementation of ICLR 2018 paper "Loss-aware Weight Quantization of Deep Networks" ☆26 · Updated 6 years ago
- Code for AAAI 2019 paper: Deep Neural Network Quantization via Layer-Wise Optimization using Limited Training Data ☆41 · Updated 6 years ago
- Source code of the paper: Robust Quantization: One Model to Rule Them All ☆40 · Updated 2 years ago
- [KDD'22] Learned Token Pruning for Transformers ☆100 · Updated 2 years ago
- Block-sparse movement pruning ☆81 · Updated 4 years ago
- Open Source Neural Machine Translation in PyTorch ☆17 · Updated 6 years ago
- ICML 2019 accepted paper: Overcoming Multi-Model Forgetting ☆14 · Updated 6 years ago
- ☆15 · Updated 5 years ago
- An 8-bit automated quantization conversion tool for PyTorch (post-training quantization based on KL divergence; a calibration sketch follows this list) ☆32 · Updated 5 years ago
- Revisiting Parameter Sharing for Automatic Neural Channel Number Search, NeurIPS 2020 ☆21 · Updated 4 years ago
- [ICLR 2024] The official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Mod… ☆30 · Updated last year
- A collection of training tricks for binarized neural networks ☆72 · Updated 4 years ago
- The official implementation of You Only Compress Once: Towards Effective and Elastic BERT Compression via Exploit-Explore Stochastic Natu… ☆48 · Updated 4 years ago
- Method to improve inference time for BERT; an implementation of the paper titled "PoWER-BERT: Accelerating BERT Inference via Pro… ☆62 · Updated last month
- Deep Neural Network Compression based on Student-Teacher Network ☆14 · Updated 2 years ago
- In this repository, we explore model compression for transformer architectures via quantization. We specifically explore quantization awa… (a fake-quantization sketch follows this list) ☆24 · Updated 4 years ago
- Code for the accepted paper "Cooperative Pruning in Cross-Domain Deep Neural Network Compression" in IJCAI 2019 ☆12 · Updated 6 years ago
- BitSplit Post-training Quantization ☆50 · Updated 3 years ago
- 3rd-place solution for the NeurIPS 2019 MicroNet challenge ☆35 · Updated 5 years ago
- Implementation of ICLR 2017 paper "Loss-aware Binarization of Deep Networks" ☆18 · Updated 6 years ago
- Code for the paper 'Minimizing FLOPs to Learn Efficient Sparse Representations', published at ICLR 2020 ☆19 · Updated 5 years ago
- Optimizing Deep Convolutional Neural Network with Ternarized Weights and High Accuracy ☆16 · Updated 6 years ago
- The official implementation of the accepted ICLR 2022 paper "BiBERT: Accurate Fully Binarized BERT" ☆88 · Updated 2 years ago
- Caffe implementation of single-level quantization ☆19 · Updated 6 years ago
- Example of applying Gaussian and Laplace clipping to the activations of a CNN ☆34 · Updated 6 years ago
- Code for the paper "A Statistical Framework for Low-bitwidth Training of Deep Neural Networks" ☆29 · Updated 5 years ago
- Code for the paper "Few Shot Network Compression via Cross Distillation", AAAI 2020 ☆31 · Updated 5 years ago
- Zero-Shot Knowledge Distillation in Deep Networks, ICML 2019 ☆49 · Updated 6 years ago
- ☆16 · Updated 6 years ago
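Several of the repositories above, including QuantizedTransformer itself and the quantization-aware training exploration noted in the list, revolve around fake-quantizing weights during training. Below is a minimal, illustrative sketch of symmetric per-tensor fake quantization for a linear layer with a straight-through estimator; the class and parameter names are my own and are not taken from any of the listed codebases.

```python
# Illustrative sketch only: symmetric per-tensor weight fake quantization
# with a straight-through estimator. Not code from any repository above.
import torch
import torch.nn as nn


class FakeQuantLinear(nn.Module):
    """Linear layer whose weights are fake-quantized to `num_bits` in the
    forward pass; gradients pass through the rounding unchanged."""

    def __init__(self, in_features, out_features, num_bits=8):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.num_bits = num_bits

    def _fake_quantize(self, w):
        qmax = 2 ** (self.num_bits - 1) - 1            # e.g. 127 for 8 bits
        scale = w.abs().max().clamp(min=1e-8) / qmax   # symmetric per-tensor scale
        w_q = torch.round(w / scale).clamp(-qmax, qmax) * scale
        # Straight-through estimator: quantized values forward, identity backward.
        return w + (w_q - w).detach()

    def forward(self, x):
        return nn.functional.linear(
            x, self._fake_quantize(self.linear.weight), self.linear.bias
        )


if __name__ == "__main__":
    layer = FakeQuantLinear(64, 64, num_bits=8)
    out = layer(torch.randn(2, 10, 64))  # (batch, seq_len, d_model)
    out.sum().backward()                 # gradients reach the full-precision weights
    print(out.shape)
```

In a quantized Transformer, a module like this would typically replace the attention projection and feed-forward linear layers, while the full-precision weights remain the parameters actually updated by the optimizer.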
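The 8-bit conversion tool in the list calibrates activation ranges with KL divergence. The sketch below shows the general idea of that calibration step under simplifying assumptions (symmetric int8, histogram re-binning by uniform chunking); it is an illustration of the technique, not the tool's actual algorithm, and all function names are hypothetical.

```python
# Illustrative sketch only: pick an activation clipping threshold by
# minimizing the KL divergence between the full-precision histogram and
# its coarsely quantized counterpart.
import numpy as np


def kl_divergence(p, q, eps=1e-12):
    p = p / (p.sum() + eps)
    q = q / (q.sum() + eps)
    mask = p > 0
    return float(np.sum(p[mask] * np.log((p[mask] + eps) / (q[mask] + eps))))


def choose_threshold(activations, num_bins=2048, num_quant_levels=128):
    """Return the |activation| clipping threshold with minimal KL divergence."""
    hist, edges = np.histogram(np.abs(activations), bins=num_bins)
    best_kl, best_threshold = np.inf, edges[-1]
    for i in range(num_quant_levels, num_bins + 1):
        # Reference distribution: bins beyond the candidate threshold are
        # folded into the last kept bin (i.e. clipped).
        ref = hist[:i].astype(np.float64).copy()
        ref[-1] += hist[i:].sum()
        # Candidate: squeeze the kept bins into num_quant_levels levels,
        # then expand back so both histograms share the same support.
        chunks = np.array_split(ref, num_quant_levels)
        cand = np.concatenate([np.full(len(c), c.mean()) for c in chunks])
        kl = kl_divergence(ref, cand)
        if kl < best_kl:
            best_kl, best_threshold = kl, edges[i]
    return best_threshold


if __name__ == "__main__":
    acts = np.random.randn(100_000) * 0.5  # stand-in for collected activations
    t = choose_threshold(acts)
    scale = t / 127.0                      # int8 scale implied by the threshold
    print(f"clip threshold: {t:.4f}, int8 scale: {scale:.6f}")
```

The chosen threshold then fixes the int8 scale used at inference time, which is what distinguishes this post-training calibration from the training-time fake quantization sketched above.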