pulp-platform / quantlab
☆40 · Updated last year
Alternatives and similar repositories for quantlab
Users interested in quantlab are comparing it to the libraries listed below.
- A tool to deploy Deep Neural Networks on PULP-based SoCs ☆92 · Updated 5 months ago
- A library to train and deploy quantised Deep Neural Networks ☆25 · Updated last year
- HW/SW co-design of sentence-level energy optimizations for latency-aware multi-task NLP inference ☆54 · Updated last year
- DNN Compiler for Heterogeneous SoCs ☆59 · Updated last week
- TensorCore Vector Processor for Deep Learning - Google Summer of Code Project ☆24 · Updated 4 years ago
- Floating-Point Optimized On-Device Learning Library for the PULP Platform ☆39 · Updated last month
- FlexASR: A Reconfigurable Hardware Accelerator for Attention-based Seq-to-Seq Networks ☆50 · Updated 10 months ago
- Tool for the deployment and analysis of TinyML applications on TFLM and MicroTVM backends ☆33 · Updated this week
- ☆36 · Updated 4 years ago
- NeuraLUT-Assemble ☆46 · Updated 4 months ago
- Linux Docker image for the DNN accelerator exploration infrastructure composed of Accelergy and Timeloop ☆62 · Updated 3 months ago
- Fork of upstream onnxruntime focused on supporting RISC-V accelerators ☆88 · Updated 2 years ago
- Provides the hardware code for the paper "EBPC: Extended Bit-Plane Compression for Deep Neural Network Inference and Training Accelerator… ☆24 · Updated 5 years ago
- FRAME: Fast Roofline Analytical Modeling and Estimation ☆39 · Updated 2 years ago
- A Spatial Accelerator Generation Framework for Tensor Algebra ☆60 · Updated 4 years ago
- ☆72 · Updated 2 years ago
- ☆35 · Updated 6 years ago
- ☆86 · Updated 2 years ago
- [ICASSP'20] DNN-Chip Predictor: An Analytical Performance Predictor for DNN Accelerators with Various Dataflows and Hardware Architecture… ☆25 · Updated 3 years ago
- A Toy-Purpose TPU Simulator ☆21 · Updated last year
- ACM TODAES Best Paper Award, 2022 ☆32 · Updated 2 years ago
- ☆39 · Updated last week
- ☆36 · Updated this week
- A DSL for Systolic Arrays ☆83 · Updated 7 years ago
- MaxEVA: Maximizing the Efficiency of Matrix Multiplication on Versal AI Engine (accepted as full paper at FPT'23) ☆21 · Updated last year
- LCAI-TIHU SW is a software stack for the AI inference processor based on RISC-V ☆23 · Updated 3 years ago
- ☆42 · Updated 9 months ago
- SAMO: Streaming Architecture Mapping Optimisation ☆34 · Updated 2 years ago
- Provides the code for the paper "EBPC: Extended Bit-Plane Compression for Deep Neural Network Inference and Training Accelerators" by Luk… ☆19 · Updated 6 years ago
- PyTorch model to RTL flow for low-latency inference ☆131 · Updated last year