wangmaolin / niti
Implementation of "NITI: Training Integer Neural Networks Using Integer-only Arithmetic" (arXiv)
☆84 · Updated 2 years ago
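As a rough sketch of the idea in the paper's title (not code from this repository), an integer-only linear layer can hold weights and activations as int8, accumulate the matmul exactly in int32, and rescale with a power-of-two right shift so no floating point appears anywhere. The NumPy example below uses invented names (`int_linear`, `shift`) purely for illustration.

```python
import numpy as np

def int_linear(x_int8: np.ndarray, w_int8: np.ndarray, shift: int) -> np.ndarray:
    # int8 x int8 matmul, accumulated exactly in int32
    acc = x_int8.astype(np.int32) @ w_int8.astype(np.int32)
    # power-of-two rescale via arithmetic right shift (no floats involved)
    y = acc >> shift
    # saturate back to the int8 range
    return np.clip(y, -128, 127).astype(np.int8)

rng = np.random.default_rng(0)
x = rng.integers(-128, 128, size=(4, 16), dtype=np.int8)
w = rng.integers(-128, 128, size=(16, 8), dtype=np.int8)
y = int_linear(x, w, shift=8)
print(y.dtype, y.shape)  # int8 (4, 8)
```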
Alternatives and similar repositories for niti
Users interested in niti are comparing it to the libraries listed below
- Official implementation of "Searching for Winograd-aware Quantized Networks" (MLSys'20)☆27Updated last year
- Low Precision Arithmetic Simulation in PyTorch☆279Updated last year
- ☆36Updated 6 years ago
- Training with Block Minifloat number representation☆16Updated 4 years ago
- XNOR-Net, with binary gemm and binary conv2d kernels, supports both CPU and GPU. ☆85 · Updated 6 years ago
- Approximate layers - TensorFlow extension ☆27 · Updated 3 months ago
- A collection of research papers on efficient training of DNNs ☆70 · Updated 3 years ago
- Simulator for BitFusion ☆100 · Updated 4 years ago
- DNN quantization with outlier channel splitting ☆113 · Updated 5 years ago
- BNNs (XNOR, BNN and DoReFa) implementation for PyTorch 1.0+ ☆42 · Updated 2 years ago
- Improving Post Training Neural Quantization: Layer-wise Calibration and Integer Programming ☆98 · Updated 4 years ago
- [ICLR 2022 Oral] F8Net: Fixed-Point 8-bit Only Multiplication for Network Quantization ☆95 · Updated 3 years ago
- Implementation for the paper "Latent Weights Do Not Exist: Rethinking Binarized Neural Network Optimization" ☆74 · Updated 5 years ago
- Post-training sparsity-aware quantization ☆34 · Updated 2 years ago
- PyTorch fixed-point training tool/framework ☆34 · Updated 4 years ago
- PyTorch extension for emulating FP8 data formats on standard FP32 Xeon/GPU hardware. ☆110 · Updated 7 months ago
- Code for the paper "NeuralPower: Predict and Deploy Energy-Efficient Convolutional Neural Networks" ☆21 · Updated 6 years ago
- ☆153 · Updated 2 years ago
- Binarize convolutional neural networks using PyTorch ☆145 · Updated 3 years ago
- BSQ: Exploring Bit-Level Sparsity for Mixed-Precision Neural Network Quantization (ICLR 2021) ☆40 · Updated 4 years ago
- An out-of-the-box PyTorch scaffold for neural network quantization-aware training (QAT) research. Website: https://github.com/zhutmost/neuralz… ☆26 · Updated 2 years ago
- Explores energy-efficient dataflow scheduling for neural networks. ☆225 · Updated 4 years ago
- ☆71 · Updated 5 years ago
- Official implementation of the EMNLP'23 paper "Revisiting Block-based Quantisation: What is Important for Sub-8-bit LLM Inference?" ☆23 · Updated last year
- This repository contains the PyTorch scripts to train mixed-precision networks for microcontroller deployment, based on the memory contr… ☆50 · Updated last year
- Accelergy is an energy estimation infrastructure for hardware accelerators ☆144 · Updated last month
- This is a collection of works on neural networks and neural accelerators. ☆40 · Updated 6 years ago
- Conditional channel- and precision-pruning on neural networks ☆72 · Updated 5 years ago
- ☆19 · Updated 3 years ago
- Any-Precision Deep Neural Networks (AAAI 2021) ☆60 · Updated 5 years ago