wangmaolin / niti
Implementation of "NITI: Training Integer Neural Networks Using Integer-only Arithmetic" (arXiv)
☆80 · Updated 2 years ago
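As a rough sketch of the integer-only idea behind NITI's title: a linear layer can keep all tensors in integer types, accumulating int8 products in int32 and rescaling with a power-of-two right-shift. This is a hypothetical illustration, not NITI's actual code; `int_linear` and the `shift` parameter are made up for this sketch.

```python
import numpy as np

def int_linear(x_int8, w_int8, shift):
    """Integer-only linear layer: int8 in, int8 out, no floating point."""
    # Accumulate in int32, as integer hardware typically does
    acc = x_int8.astype(np.int32) @ w_int8.astype(np.int32)
    # Rounding right-shift: add half of 2**shift, then shift down
    out = (acc + (1 << (shift - 1))) >> shift
    # Saturate back into the int8 range
    return np.clip(out, -128, 127).astype(np.int8)

rng = np.random.default_rng(0)
x = rng.integers(-128, 128, size=(2, 4), dtype=np.int8)
w = rng.integers(-128, 128, size=(4, 3), dtype=np.int8)
y = int_linear(x, w, shift=8)
print(y.dtype, y.shape)  # int8 (2, 3)
```

The power-of-two rescale is what makes this hardware-friendly: it replaces a floating-point scale multiply with a single shift instruction.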
Alternatives and similar repositories for niti:
Users interested in niti are comparing it to the libraries listed below.
- Official implementation of "Searching for Winograd-aware Quantized Networks" (MLSys'20) ☆27 · Updated last year
- Simulator for BitFusion ☆95 · Updated 4 years ago
- Post-training sparsity-aware quantization ☆34 · Updated 2 years ago
- ☆69 · Updated 4 years ago
- BNNs (XNOR, BNN and DoReFa) implementation for PyTorch 1.0+ ☆39 · Updated last year
- Approximate layers - TensorFlow extension ☆27 · Updated 10 months ago
- Accelergy, an energy estimation infrastructure for accelerators ☆134 · Updated last week
- Reproduction of WAGE in PyTorch ☆41 · Updated 6 years ago
- DNN quantization with outlier channel splitting ☆112 · Updated 4 years ago
- A collection of research papers on efficient training of DNNs ☆70 · Updated 2 years ago
- PyTorch fixed-point training tool/framework ☆34 · Updated 4 years ago
- Improving Post-Training Neural Quantization: Layer-wise Calibration and Integer Programming ☆96 · Updated 3 years ago
- ☆32 · Updated 4 years ago
- Low Precision Arithmetic Simulation in PyTorch ☆272 · Updated 9 months ago
- Torch2Chip (MLSys 2024) ☆51 · Updated 3 weeks ago
- Implementation for the paper "Latent Weights Do Not Exist: Rethinking Binarized Neural Network Optimization" ☆73 · Updated 5 years ago
- This repository contains the PyTorch scripts to train mixed-precision networks for microcontroller deployment, based on the memory contr… ☆49 · Updated 9 months ago
- The code and artifacts associated with our MICRO'22 paper titled "Adaptable Butterfly Accelerator for Attention-based NNs via Hardware … ☆121 · Updated last year
- ☆91 · Updated last year
- PyTorch implementation of "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference" ☆55 · Updated 5 years ago
- PyTorch extension for emulating FP8 data formats on standard FP32 Xeon/GPU hardware ☆106 · Updated 3 months ago
- [ICLR 2022 Oral] F8Net: Fixed-Point 8-bit Only Multiplication for Network Quantization ☆94 · Updated 2 years ago
- BISMO: A Scalable Bit-Serial Matrix Multiplication Overlay for Reconfigurable Computing ☆132 · Updated 5 years ago
- ☆75 · Updated 2 years ago
- BSQ: Exploring Bit-Level Sparsity for Mixed-Precision Neural Network Quantization (ICLR 2021) ☆40 · Updated 4 years ago
- A collection of works on neural networks and neural accelerators ☆40 · Updated 6 years ago
- Tool for optimizing CNN blocking ☆93 · Updated 4 years ago
- [HPCA'21] SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning ☆82 · Updated 6 months ago
- The official, proof-of-concept C++ implementation of PocketNN ☆32 · Updated 8 months ago
- AFP, a hardware-friendly quantization framework for DNNs, contributed by Fangxin Liu and Wenbo Zhao ☆12 · Updated 3 years ago