wangmaolin / niti
Implementation of "NITI: Training Integer Neural Networks Using Integer-only Arithmetic" (arXiv)
☆84 · Updated 2 years ago
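NITI's premise, per the title, is training with integer-only arithmetic: weights, activations, and gradients stay as low-bit integers and floating-point operations are avoided. The snippet below is a minimal, hypothetical sketch of one common pattern for integer-only computation (an int8 linear layer that accumulates in int32 and rescales with a power-of-two right shift). It is written in NumPy purely for illustration; the function name, shapes, and `shift` value are assumptions, not code from the niti repository.

```python
import numpy as np

def int8_linear(x_q: np.ndarray, w_q: np.ndarray, shift: int) -> np.ndarray:
    """Integer-only linear layer: int8 inputs and weights, int32 accumulation,
    then a power-of-two right shift instead of a floating-point rescale."""
    acc = x_q.astype(np.int32) @ w_q.astype(np.int32).T  # widen to int32 for the accumulate
    acc = acc >> shift                                    # rescale by 2**-shift, integer only
    return np.clip(acc, -128, 127).astype(np.int8)        # saturate back to int8

# Toy usage: one 4-feature sample through an 8-unit layer.
rng = np.random.default_rng(0)
x_q = rng.integers(-128, 128, size=(1, 4), dtype=np.int8)
w_q = rng.integers(-128, 128, size=(8, 4), dtype=np.int8)
print(int8_linear(x_q, w_q, shift=7))
```

Integer-only training additionally has to handle the backward pass and weight updates without floats; this sketch only shows the forward multiply-accumulate-and-rescale pattern.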
Alternatives and similar repositories for niti
Users interested in niti are comparing it to the libraries listed below.
- Official implementation of "Searching for Winograd-aware Quantized Networks" (MLSys'20) (☆27, updated last year)
- Simulator for BitFusion (☆100, updated 4 years ago)
- A collection of research papers on efficient training of DNNs (☆70, updated 2 years ago)
- ☆36, updated 6 years ago
- Improving Post Training Neural Quantization: Layer-wise Calibration and Integer Programming (☆97, updated 4 years ago)
- Implementations of BNNs (XNOR, BNN and DoReFa) for PyTorch 1.0+ (☆41, updated 2 years ago)
- CMix-NN: Mixed Low-Precision CNN Library for Memory-Constrained Edge Devices (☆43, updated 5 years ago)
- Approximate layers - TensorFlow extension (☆27, updated 2 months ago)
- Low Precision Arithmetic Simulation in PyTorch (☆279, updated last year)
- Quantization of Convolutional Neural Networks (☆244, updated 10 months ago)
- Accelergy is an energy estimation infrastructure for hardware accelerators (☆143, updated last month)
- XNOR-Net with binary gemm and binary conv2d kernels, supporting both CPU and GPU (☆85, updated 6 years ago)
- [FPGA'21] CoDeNet is an efficient object detection model on PyTorch, with SOTA performance on VOC and COCO based on CenterNet and Co-Desi… (☆25, updated 2 years ago)
- BISMO: A Scalable Bit-Serial Matrix Multiplication Overlay for Reconfigurable Computing (☆139, updated 5 years ago)
- DNN quantization with outlier channel splitting (☆113, updated 5 years ago)
- Linux docker for the DNN accelerator exploration infrastructure composed of Accelergy and Timeloop (☆53, updated 2 months ago)
- PyTorch extension for emulating FP8 data formats on standard FP32 Xeon/GPU hardware (☆110, updated 6 months ago)
- ☆71, updated 5 years ago
- Post-training sparsity-aware quantization (☆34, updated 2 years ago)
- [ICML 2021] "Auto-NBA: Efficient and Effective Search Over the Joint Space of Networks, Bitwidths, and Accelerators" by Yonggan Fu, Yonga… (☆16, updated 3 years ago)
- Official implementation of EMNLP'23 paper "Revisiting Block-based Quantisation: What is Important for Sub-8-bit LLM Inference?" (☆22, updated last year)
- This is a collection of works on neural networks and neural accelerators (☆40, updated 6 years ago)
- A tool to deploy Deep Neural Networks on PULP-based SoCs (☆80, updated 4 months ago)
- Adaptive floating-point based numerical format for resilient deep learning (☆14, updated 3 years ago)
- An out-of-the-box PyTorch scaffold for Neural Network Quantization-Aware Training (QAT) research. Website: https://github.com/zhutmost/neuralz… (☆26, updated 2 years ago)
- Implementation for the paper "Latent Weights Do Not Exist: Rethinking Binarized Neural Network Optimization" (☆74, updated 5 years ago)
- GoldenEye is a functional simulator with fault injection capabilities for common and emerging numerical formats, implemented for the PyTo… (☆25, updated 8 months ago)
- Explores energy-efficient dataflow scheduling for neural networks (☆225, updated 4 years ago)
- PyTorch fixed-point training tool/framework (☆34, updated 4 years ago)
- [ICLR 2022 Oral] F8Net: Fixed-Point 8-bit Only Multiplication for Network Quantization (☆95, updated 3 years ago)