jaewoosong / pocketnn
The official, proof-of-concept C++ implementation of PocketNN.
☆35 · Updated 3 months ago
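PocketNN's focus is integer-only deep learning in pure C++: weights, activations, and error signals stay in plain integer types, so both training and inference reduce to integer multiply-accumulate work. Below is a minimal sketch of that style of fixed-point arithmetic; the Q8 format, function names, and shapes are illustrative assumptions, not PocketNN's actual API.

```cpp
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <vector>

// Illustrative Q8 fixed-point format: a raw integer r represents r / 256.
// PocketNN's real types and scaling differ; this is only a sketch.
constexpr int kFracBits = 8;

// Integer-only dense layer: y = ((W * x) >> kFracBits) + b, no floats anywhere.
std::vector<int32_t> dense(const std::vector<std::vector<int32_t>>& W,
                           const std::vector<int32_t>& x,
                           const std::vector<int32_t>& b) {
    std::vector<int32_t> y(W.size(), 0);
    for (std::size_t i = 0; i < W.size(); ++i) {
        int64_t acc = 0;  // wide accumulator so the products cannot overflow
        for (std::size_t j = 0; j < x.size(); ++j)
            acc += static_cast<int64_t>(W[i][j]) * x[j];
        y[i] = static_cast<int32_t>(acc >> kFracBits) + b[i];
    }
    return y;
}

int main() {
    // 2x3 weights, 3-element input, all stored as Q8 raw integers.
    std::vector<std::vector<int32_t>> W = {{256, 0, 128},   // {1.0, 0.0,  0.5}
                                           {0, 256, -64}};  // {0.0, 1.0, -0.25}
    std::vector<int32_t> x = {256, 512, 256};  // {1.0, 2.0, 1.0}
    std::vector<int32_t> b = {0, 128};         // {0.0, 0.5}
    for (int32_t v : dense(W, x, b))
        std::cout << v / 256.0 << '\n';        // prints 1.5 and 2.25
    return 0;
}
```

Real integer-only frameworks additionally have to manage overflow, requantization between layers, and integer-friendly activations; the sketch above elides all of that.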
Alternatives and similar repositories for pocketnn
Users interested in pocketnn are comparing it to the libraries listed below.
- Implementation of "NITI: Training Integer Neural Networks Using Integer-only Arithmetic" on arxiv☆89Updated 3 years ago
- ☆168 · Updated 2 years ago
- QONNX: Arbitrary-Precision Quantized Neural Networks in ONNX ☆170 · Updated this week
- Reference implementations of popular Binarized Neural Networks ☆109 · Updated last week
- Converting a deep neural network to integer-only inference in native C via uniform quantization and the fixed-point representation (a minimal sketch of uniform quantization follows this list). ☆26 · Updated 3 years ago
- ☆29 · Updated 4 years ago
- Official implementation of "Searching for Winograd-aware Quantized Networks" (MLSys'20) ☆27 · Updated 2 years ago
- Highly optimized inference engine for Binarized Neural Networks ☆251 · Updated this week
- Implementation of convolution layer in different flavors ☆68 · Updated 8 years ago
- μNAS is a neural architecture search (NAS) system that designs small-yet-powerful microcontroller-compatible neural networks. ☆82 · Updated 4 years ago
- Low Precision Arithmetic Simulation in PyTorch ☆287 · Updated last year
- Header-only C library for Binary Neural Network Feedforward Inference (targeting small devices) ☆48 · Updated 4 years ago
- Lightweight C implementation of CNNs for Embedded Systems ☆62 · Updated 2 years ago
- Butterfly matrix multiplication in PyTorch ☆178 · Updated 2 years ago
- A tool to deploy Deep Neural Networks on PULP-based SoCs ☆91 · Updated 5 months ago
- NEural Minimizer for pytOrch ☆47 · Updated last year
- bfloat16 dtype for numpy ☆20 · Updated 2 years ago
- NNCG: A Neural Network Code Generator ☆35 · Updated last year
- CUDA templates for tile-sparse matrix multiplication based on CUTLASS ☆50 · Updated 7 years ago
- Fast matrix multiplication for few-bit integer matrices on CPUs ☆28 · Updated 6 years ago
- GEMM and Winograd based convolutions using CUTLASS ☆28 · Updated 5 years ago
- Chameleon: Adaptive Code Optimization for Expedited Deep Neural Network Compilation ☆27 · Updated 6 years ago
- CMix-NN: Mixed Low-Precision CNN Library for Memory-Constrained Edge Devices ☆48 · Updated 5 years ago
- Train neural networks with joint quantization and pruning on both weights and activations using any PyTorch modules ☆43 · Updated 3 years ago
- ☆40 · Updated last year
- ☆15 · Updated 2 months ago
- Customized matrix multiplication kernels ☆57 · Updated 3 years ago
- Implementation for the paper "Latent Weights Do Not Exist: Rethinking Binarized Neural Network Optimization" ☆75 · Updated 6 years ago
- ☆208 · Updated 4 years ago
- TFLite model analyzer & memory optimizer ☆135 · Updated last year
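Several of the entries above (the native-C integer-only inference converter, QONNX, CMix-NN) revolve around uniform quantization: mapping floating-point tensors onto low-bit integers through a scale and zero point. The following is a minimal sketch of that mapping using asymmetric uint8 quantization; the struct and function names are illustrative and not taken from any of the repositories listed.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <iostream>
#include <vector>

// Asymmetric uniform quantization to uint8:
//   q = clamp(round(x / scale) + zero_point, 0, 255)
struct QuantParams {
    float scale;
    int32_t zero_point;
};

// Derive scale/zero-point from the observed min/max of a tensor.
// Assumes the tensor has a nonzero range (hi > lo).
QuantParams choose_params(const std::vector<float>& x) {
    float lo = *std::min_element(x.begin(), x.end());
    float hi = *std::max_element(x.begin(), x.end());
    lo = std::min(lo, 0.0f);  // the representable range must contain 0
    hi = std::max(hi, 0.0f);
    float scale = (hi - lo) / 255.0f;
    int32_t zp = static_cast<int32_t>(std::lround(-lo / scale));
    return {scale, zp};
}

uint8_t quantize(float x, const QuantParams& p) {
    int32_t q = static_cast<int32_t>(std::lround(x / p.scale)) + p.zero_point;
    return static_cast<uint8_t>(std::clamp(q, 0, 255));
}

float dequantize(uint8_t q, const QuantParams& p) {
    return (static_cast<int32_t>(q) - p.zero_point) * p.scale;
}

int main() {
    std::vector<float> w = {-0.7f, 0.0f, 0.31f, 1.2f};
    QuantParams p = choose_params(w);
    for (float v : w) {
        uint8_t q = quantize(v, p);
        std::cout << v << " -> " << int(q) << " -> " << dequantize(q, p) << '\n';
    }
    return 0;
}
```

For values inside the calibrated range, the dequantized result differs from the original by at most half a quantization step (scale / 2), which is the accuracy budget these libraries trade against memory and compute savings.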