pulp-platform / quantlib
A library to train and deploy quantised Deep Neural Networks
☆ 24 · Updated 6 months ago
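For readers unfamiliar with what a library like this provides, below is a minimal sketch of the core idea behind quantisation-aware training: a uniform "fake" quantiser with a straight-through estimator, written in plain PyTorch. The `FakeQuantize` class, its parameters, and the bit-width choice are illustrative assumptions for this sketch, not quantlib's actual API.

```python
# Minimal fake-quantisation sketch (illustrative only; NOT quantlib's API).
# Forward pass: quantise to n_bits with a symmetric per-tensor scale.
# Backward pass: pass gradients straight through (straight-through estimator).
import torch


class FakeQuantize(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, n_bits=8):
        qmax = 2 ** (n_bits - 1) - 1
        scale = x.abs().max() / qmax + 1e-12           # per-tensor symmetric scale
        q = torch.clamp(torch.round(x / scale), -qmax - 1, qmax)
        return q * scale                                # dequantised ("fake") output

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output, None                        # gradient w.r.t. x; none for n_bits


x = torch.randn(4, requires_grad=True)
y = FakeQuantize.apply(x, 4)                            # 4-bit fake quantisation
y.sum().backward()                                      # gradients flow as if identity
```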
Alternatives and similar repositories for quantlib
Users interested in quantlib are comparing it to the libraries listed below.
- Floating-Point Optimized On-Device Learning Library for the PULP Platform. ☆ 34 · Updated last month
- ☆ 36 · Updated last year
- DNN Compiler for Heterogeneous SoCs ☆ 39 · Updated last week
- A tool to deploy Deep Neural Networks on PULP-based SoCs ☆ 80 · Updated 4 months ago
- FlexASR: A Reconfigurable Hardware Accelerator for Attention-based Seq-to-Seq Networks ☆ 46 · Updated 4 months ago
- ☆ 82 · Updated last year
- Provides the hardware code for the paper "EBPC: Extended Bit-Plane Compression for Deep Neural Network Inference and Training Accelerator…" ☆ 24 · Updated 4 years ago
- ☆ 47 · Updated 2 months ago
- ReckOn: A Spiking RNN Processor Enabling On-Chip Learning over Second-Long Timescales - HDL source code and documentation. ☆ 84 · Updated 3 years ago
- BARVINN: A Barrel RISC-V Neural Network Accelerator: https://barvinn.readthedocs.io/en/latest/ ☆ 88 · Updated 5 months ago
- Adaptive floating-point based numerical format for resilient deep learning ☆ 14 · Updated 3 years ago
- SAMO: Streaming Architecture Mapping Optimisation ☆ 33 · Updated last year
- ☆ 35 · Updated 3 months ago
- ☆ 33 · Updated 6 years ago
- Algorithmic C Machine Learning Library ☆ 23 · Updated 6 months ago
- This repository contains the results and code for the MLPerf™ Tiny Inference v0.7 benchmark. ☆ 18 · Updated 2 years ago
- NeuraLUT: Hiding Neural Network Density in Boolean Synthesizable Functions ☆ 37 · Updated 2 months ago
- FPGA-based hardware acceleration for dropout-based Bayesian Neural Networks. ☆ 24 · Updated last year
- ☆ 58 · Updated 5 years ago
- An Open Workflow to Build Custom SoCs and run Deep Models at the Edge ☆ 81 · Updated last month
- Fork of upstream onnxruntime focused on supporting RISC-V accelerators ☆ 87 · Updated 2 years ago
- HW Architecture-Mapping Design Space Exploration Framework for Deep Learning Accelerators ☆ 151 · Updated last week
- Multi-core HW accelerator mapping optimization framework for layer-fused ML workloads. ☆ 54 · Updated this week
- Converting a deep neural network to integer-only inference in native C via uniform quantization and the fixed-point representation. ☆ 25 · Updated 3 years ago
- SAURIA (Systolic-Array tensor Unit for aRtificial Intelligence Acceleration) is an open-source Convolutional Neural Network accelerator b… ☆ 46 · Updated 8 months ago
- Fully open-source spiking neural network accelerator ☆ 152 · Updated 2 years ago
- Benchmark framework of 3D integrated CIM accelerators for popular DNN inference, supporting both monolithic and heterogeneous 3D integration ☆ 22 · Updated 3 years ago
- Quantized ResNet50 Dataflow Acceleration on Alveo, with PYNQ ☆ 57 · Updated 3 years ago
- Torch2Chip (MLSys, 2024) ☆ 52 · Updated 2 months ago
- ☆ 30 · Updated 7 months ago