GATECH-EIC / Auto-NBA
[ICML 2021] "Auto-NBA: Efficient and Effective Search Over the Joint Space of Networks, Bitwidths, and Accelerators" by Yonggan Fu, Yongan Zhang, Yang Zhang, David Cox, Yingyan Lin
☆16 · Updated 3 years ago
Alternatives and similar repositories for Auto-NBA
Users interested in Auto-NBA are comparing it to the libraries listed below.
- ☆19 · Updated 4 years ago
- ☆30 · Updated 6 months ago
- An open-source PyTorch library for developing energy-efficient multiplication-less models and applications. ☆13 · Updated 8 months ago
- Neural Network Quantization With Fractional Bit-widths ☆12 · Updated 4 years ago
- SAMO: Streaming Architecture Mapping Optimisation ☆34 · Updated 2 years ago
- [ICASSP'20] DNN-Chip Predictor: An Analytical Performance Predictor for DNN Accelerators with Various Dataflows and Hardware Architecture… ☆25 · Updated 3 years ago
- A DAG processor and compiler for a tree-based spatial datapath. ☆14 · Updated 3 years ago
- Training with Block Minifloat number representation ☆16 · Updated 4 years ago
- ☆71 · Updated 5 years ago
- Provides the code for the paper "EBPC: Extended Bit-Plane Compression for Deep Neural Network Inference and Training Accelerators" by Luk… ☆20 · Updated 6 years ago
- BSQ: Exploring Bit-Level Sparsity for Mixed-Precision Neural Network Quantization (ICLR 2021) ☆42 · Updated 4 years ago
- An FPGA-based neural network inference accelerator, which won third place in DAC-SDC ☆28 · Updated 3 years ago
- Fast Emulation of Approximate DNN Accelerators in PyTorch ☆26 · Updated last year
- ☆35 · Updated 5 years ago
- Designs for finalist teams of the DAC System Design Contest ☆37 · Updated 5 years ago
- Static Block Floating Point Quantization for CNN ☆36 · Updated 4 years ago
- FracBNN: Accurate and FPGA-Efficient Binary Neural Networks with Fractional Activations ☆94 · Updated 4 years ago
- ☆72 · Updated 2 years ago
- ☆18 · Updated 2 years ago
- The code for Joint Neural Architecture Search and Quantization ☆13 · Updated 6 years ago
- Official implementation of "Searching for Winograd-aware Quantized Networks" (MLSys'20) ☆27 · Updated 2 years ago
- ☆25 · Updated 2 years ago
- ☆10 · Updated 10 months ago
- MaxEVA: Maximizing the Efficiency of Matrix Multiplication on Versal AI Engine (accepted as full paper at FPT'23) ☆21 · Updated last year
- HW/SW co-design of sentence-level energy optimizations for latency-aware multi-task NLP inference ☆52 · Updated last year
- A general framework for optimizing DNN dataflow on systolic arrays ☆38 · Updated 4 years ago
- Adaptive floating-point based numerical format for resilient deep learning ☆14 · Updated 3 years ago
- ☆23 · Updated 3 years ago
- [FPGA-2022] N3H-Core: Neuron-designed Neural Network Accelerator via FPGA-based Heterogeneous Computing Cores ☆12 · Updated 3 years ago
- [TCAD 2021] Block Convolution: Towards Memory-Efficient Inference of Large-Scale CNNs on FPGA ☆17 · Updated 3 years ago