SKKU-ESLAB / CNN-on-flash
CNN functions for dense matrices resident in flash storage
☆23 · Updated 6 years ago
Alternatives and similar repositories for CNN-on-flash
Users interested in CNN-on-flash are comparing it to the repositories listed below.
- Arm Compute Library implementation of efficient low-precision neural networks · ☆25 · Updated 5 years ago
- ANT framework's model database, providing DNN models for a wide range of IoT devices · ☆17 · Updated this week
- Virtual Connection: Framework for P2P Communication Abstraction · ☆23 · Updated 5 years ago
- Automatic DNN compression tool with various model compression and neural architecture search techniques · ☆21 · Updated 9 months ago
- ANT (AI-based Networked Things) Framework · ☆27 · Updated 9 months ago
- Enhanced version of IoT.js for the ANT Framework, a platform for the Internet of Things with JavaScript · ☆15 · Updated 5 years ago
- Lightweight C implementation of CNNs for embedded systems · ☆62 · Updated 2 years ago
- Study group on deep learning compilers · ☆166 · Updated 2 years ago
- ☆29 · Updated 4 years ago
- C implementation of the Open Neural Network Exchange (ONNX) Runtime · ☆33 · Updated 3 years ago
- IoT.js of ANT based on Tizen RT · ☆14 · Updated 5 years ago
- Neural network acceleration using CPU/GPU, ASIC, and FPGA · ☆63 · Updated 5 years ago
- NNtrainer: a software framework for training and inferencing neural network models on devices · ☆192 · Updated this week
- Neural network acceleration on ASIC, FPGA, GPU, and PIM · ☆54 · Updated 5 years ago
- BlockCIrculantRNN (LSTM and GRU) using TensorFlow · ☆14 · Updated 7 years ago
- TVM stack: exploring the incredible explosion of deep-learning frameworks and how to bring them together · ☆64 · Updated 7 years ago
- Convert single-precision float to bfloat16 (Brain Floating Point) floating-point format; a minimal conversion sketch appears after this list · ☆14 · Updated 6 years ago
- Official implementation of "Searching for Winograd-aware Quantized Networks" (MLSys'20) · ☆27 · Updated 2 years ago
- Implementation of convolution layer in different flavors · ☆68 · Updated 8 years ago
- nnq_cnd_study stands for Neural Network Quantization & Compact Networks Design Study · ☆13 · Updated 5 years ago
- A Winograd Minimal Filter Implementation in CUDA · ☆28 · Updated 4 years ago
- ☆33 · Updated 2 years ago
- Converting a deep neural network to integer-only inference in native C via uniform quantization and fixed-point representation; a minimal quantization sketch appears after this list · ☆26 · Updated 3 years ago
- ONNX Parser: a tool that automatically generates OpenVX inference code (CNN) from ONNX binary model files · ☆18 · Updated 7 years ago
- ☆14 · Updated 9 months ago
- Parse TFLite models (*.tflite) easily with Python; see the API at https://zhenhuaw.me/tflite/docs/ · ☆104 · Updated 11 months ago
- XLA integration of Open Neural Network Exchange (ONNX) · ☆19 · Updated 7 years ago
- NNCG: A Neural Network Code Generator
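For the bfloat16 entry above, here is a minimal C sketch of the usual float32-to-bfloat16 conversion (keep the top 16 bits after round-to-nearest-even). It only illustrates the general technique; it is not taken from the linked repository, and the helper names are made up for illustration.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Illustrative helper: convert an IEEE-754 binary32 float to bfloat16 by
 * keeping the sign, the 8 exponent bits, and the top 7 mantissa bits.
 * Round-to-nearest-even is applied before truncating the low 16 bits.
 * (NaN inputs are not handled in this sketch.) */
static uint16_t float_to_bfloat16(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);            /* type-pun safely via memcpy */

    uint32_t lsb = (bits >> 16) & 1u;          /* ties go to the even result */
    bits += 0x7FFFu + lsb;                     /* round to nearest even */

    return (uint16_t)(bits >> 16);             /* upper 16 bits = bfloat16 */
}

/* Convert back by placing the 16 bits in the upper half of a float. */
static float bfloat16_to_float(uint16_t h)
{
    uint32_t bits = (uint32_t)h << 16;
    float f;
    memcpy(&f, &bits, sizeof f);
    return f;
}

int main(void)
{
    float x = 3.14159265f;
    uint16_t b = float_to_bfloat16(x);
    printf("%.8f -> 0x%04X -> %.8f\n", x, (unsigned)b, bfloat16_to_float(b));
    return 0;
}
```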
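For the integer-only inference entry above, the following is a generic C sketch of symmetric uniform quantization to int8, not the linked repository's code; all names are hypothetical. A full integer-only pipeline would additionally express the scale itself as an integer multiplier plus bit shift (a fixed-point representation), which this sketch omits.

```c
#include <stdint.h>
#include <stdio.h>
#include <math.h>

/* Symmetric uniform quantization: a real value r is approximated
 * as r ~= scale * q, with the integer q clamped to [-127, 127]. */

/* Derive the scale from the maximum absolute value of the tensor. */
static float quant_scale(const float *x, int n)
{
    float max_abs = 0.0f;
    for (int i = 0; i < n; i++) {
        float a = fabsf(x[i]);
        if (a > max_abs) max_abs = a;
    }
    return max_abs > 0.0f ? max_abs / 127.0f : 1.0f;
}

static int8_t quantize(float r, float scale)
{
    long q = lroundf(r / scale);
    if (q >  127) q =  127;                    /* clamp to the int8 range */
    if (q < -127) q = -127;
    return (int8_t)q;
}

static float dequantize(int8_t q, float scale)
{
    return scale * (float)q;
}

int main(void)
{
    float x[4] = { -1.5f, 0.02f, 0.7f, 1.2f };
    float s = quant_scale(x, 4);
    for (int i = 0; i < 4; i++) {
        int8_t q = quantize(x[i], s);
        printf("%+.3f -> %4d -> %+.3f\n", x[i], q, dequantize(q, s));
    }
    return 0;
}
```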