jaewoosong / pocketnn
The official, proof-of-concept C++ implementation of PocketNN.
☆32 · Updated 9 months ago
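PocketNN trains and runs networks entirely in integer arithmetic, with no floating point. The sketch below is a minimal illustration of that style, assuming an int8 fully connected layer with int32 accumulation and power-of-two rescaling; the function name `fc_int8`, its parameters, and the shift amount are hypothetical and not part of PocketNN's actual API.

```cpp
// Minimal sketch of integer-only computation of the kind PocketNN targets:
// an 8-bit fully connected layer with no floating-point operations.
// Names and sizes are illustrative assumptions, not PocketNN's API.
#include <cstdint>
#include <cstdio>
#include <vector>

// Hypothetical helper: y = saturate((W * x) >> shift), int32 accumulation.
std::vector<int8_t> fc_int8(const std::vector<int8_t>& x,
                            const std::vector<int8_t>& w, // row-major [out][in]
                            int in_dim, int out_dim, int shift) {
    std::vector<int8_t> y(out_dim);
    for (int o = 0; o < out_dim; ++o) {
        int32_t acc = 0; // wide accumulator avoids int8 overflow
        for (int i = 0; i < in_dim; ++i)
            acc += static_cast<int32_t>(w[o * in_dim + i]) * x[i];
        acc >>= shift;                // rescale by a power of two, not a float multiply
        if (acc > 127) acc = 127;     // saturate back into int8 range
        if (acc < -128) acc = -128;
        y[o] = static_cast<int8_t>(acc);
    }
    return y;
}

int main() {
    std::vector<int8_t> x = {10, -5, 3, 7};
    std::vector<int8_t> w(2 * 4, 1); // 2x4 all-ones weight matrix
    for (int8_t v : fc_int8(x, w, 4, 2, /*shift=*/2)) std::printf("%d ", v);
    std::printf("\n"); // prints "3 3": (10-5+3+7) >> 2 = 3 for each output
    return 0;
}
```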
Alternatives and similar repositories for pocketnn:
Users interested in pocketnn are comparing it to the libraries listed below.
- Implementation of "NITI: Training Integer Neural Networks Using Integer-only Arithmetic" on arXiv ☆80 · Updated 2 years ago
- GEMM and Winograd-based convolutions using CUTLASS ☆26 · Updated 4 years ago
- Official implementation of "Searching for Winograd-aware Quantized Networks" (MLSys'20) ☆27 · Updated last year
- PyTorch extension for emulating FP8 data formats on standard FP32 Xeon/GPU hardware. ☆107 · Updated 3 months ago
- Converting a deep neural network to integer-only inference in native C via uniform quantization and the fixed-point representation. ☆23 · Updated 3 years ago
- Post-training sparsity-aware quantization ☆34 · Updated 2 years ago
- A Winograd Minimal Filter Implementation in CUDA ☆24 · Updated 3 years ago
- CUDA templates for tile-sparse matrix multiplication based on CUTLASS. ☆50 · Updated 7 years ago
- A tool to deploy Deep Neural Networks on PULP-based SoCs ☆81 · Updated last month
- Implementations of the convolution layer in different flavors ☆68 · Updated 7 years ago
- ☆141 · Updated 2 years ago
- This repository contains the PyTorch scripts to train mixed-precision networks for microcontroller deployment, based on the memory contr… ☆49 · Updated 10 months ago
- ColTraIn HBFP Training Emulator ☆16 · Updated 2 years ago
- ☆12 · Updated 3 years ago
- Approximate layers - TensorFlow extension ☆27 · Updated 11 months ago
- Adaptive floating-point based numerical format for resilient deep learning ☆14 · Updated 2 years ago
- ☆10 · Updated 3 years ago
- This is the implementation for the paper "AdaTune: Adaptive Tensor Program Compilation Made Efficient" (NeurIPS 2020). ☆13 · Updated 3 years ago
- Train neural networks with joint quantization and pruning on both weights and activations using any PyTorch modules ☆40 · Updated 2 years ago
- PyTorch implementation of TQT. ☆21 · Updated 3 years ago
- An out-of-the-box PyTorch scaffold for neural network quantization-aware-training (QAT) research. Website: https://github.com/zhutmost/neuralz… ☆26 · Updated 2 years ago
- Fork of upstream onnxruntime focused on supporting RISC-V accelerators ☆83 · Updated 2 years ago
- INT-Q extension of the CMSIS-NN library for ARM Cortex-M targets ☆18 · Updated 5 years ago
- Improving Post Training Neural Quantization: Layer-wise Calibration and Integer Programming ☆96 · Updated 3 years ago
- Customized matrix multiplication kernels ☆53 · Updated 3 years ago
- CMix-NN: Mixed Low-Precision CNN Library for Memory-Constrained Edge Devices ☆41 · Updated 5 years ago
- A collection of research papers on efficient training of DNNs ☆70 · Updated 2 years ago
- Code for ICML 2021 submission ☆35 · Updated 4 years ago
- ☆27 · Updated 4 years ago
- Fast matrix multiplication for few-bit integer matrices on CPUs. ☆27 · Updated 6 years ago