wangmaolin / niti
Implementation of "NITI: Training Integer Neural Networks Using Integer-only Arithmetic" (arXiv)
☆ 83 · Updated 2 years ago
Alternatives and similar repositories for niti
Users interested in niti are comparing it to the repositories listed below.
- Official implementation of "Searching for Winograd-aware Quantized Networks" (MLSys'20) · ☆ 27 · Updated last year
- ☆ 36 · Updated 6 years ago
- BNN implementations (XNOR, BNN, and DoReFa) for PyTorch 1.0+ · ☆ 40 · Updated 2 years ago
- A collection of research papers on efficient training of DNNs · ☆ 70 · Updated 2 years ago
- Improving Post Training Neural Quantization: Layer-wise Calibration and Integer Programming · ☆ 97 · Updated 3 years ago
- Approximate layers - TensorFlow extension · ☆ 27 · Updated last month
- Implementation for the paper "Latent Weights Do Not Exist: Rethinking Binarized Neural Network Optimization" · ☆ 74 · Updated 5 years ago
- Simulator for BitFusion · ☆ 100 · Updated 4 years ago
- Reproduction of WAGE in PyTorch · ☆ 42 · Updated 6 years ago
- ☆ 76 · Updated 2 years ago
- XNOR-Net with binary GEMM and binary conv2d kernels, supporting both CPU and GPU · ☆ 85 · Updated 6 years ago
- Accelergy, an energy estimation infrastructure for accelerators · ☆ 138 · Updated last week
- Quantization of convolutional neural networks · ☆ 244 · Updated 10 months ago
- [ICLR 2021 Spotlight] "CPT: Efficient Deep Neural Network Training via Cyclic Precision" by Yonggan Fu, Han Guo, Meng Li, Xin Yang, Yinin… · ☆ 31 · Updated last year
- Post-training sparsity-aware quantization · ☆ 34 · Updated 2 years ago
- Torch2Chip (MLSys 2024) · ☆ 51 · Updated 2 months ago
- TBNv2: Convolutional Neural Network With Ternary Inputs and Binary Weights · ☆ 17 · Updated 5 years ago
- Code for the paper "A Statistical Framework for Low-bitwidth Training of Deep Neural Networks" · ☆ 28 · Updated 4 years ago
- PyTorch extension for emulating FP8 data formats on standard FP32 Xeon/GPU hardware · ☆ 110 · Updated 6 months ago
- DNN quantization with outlier channel splitting · ☆ 113 · Updated 5 years ago
- Binarize convolutional neural networks using PyTorch · ☆ 146 · Updated 3 years ago
- ☆ 71 · Updated 5 years ago
- Low Precision Arithmetic Simulation in PyTorch · ☆ 278 · Updated last year
- [ICLR 2022 Oral] F8Net: Fixed-Point 8-bit Only Multiplication for Network Quantization · ☆ 95 · Updated 3 years ago
- Adaptive floating-point-based numerical format for resilient deep learning · ☆ 14 · Updated 3 years ago
- ☆ 149 · Updated 2 years ago
- CMix-NN: Mixed Low-Precision CNN Library for Memory-Constrained Edge Devices · ☆ 43 · Updated 5 years ago
- Any-Precision Deep Neural Networks (AAAI 2021) · ☆ 60 · Updated 5 years ago
- Code and artifacts associated with our MICRO'22 paper "Adaptable Butterfly Accelerator for Attention-based NNs via Hardware …" · ☆ 135 · Updated 2 years ago
- Code for our ECCV 2020 paper "Post-Training Piecewise Linear Quantization for Deep Neural Networks" · ☆ 69 · Updated 3 years ago