GATECH-EIC / Auto-NBA
[ICML 2021] "Auto-NBA: Efficient and Effective Search Over the Joint Space of Networks, Bitwidths, and Accelerators" by Yonggan Fu, Yongan Zhang, Yang Zhang, David Cox, Yingyan Lin
☆15 · Updated 3 years ago
Alternatives and similar repositories for Auto-NBA
Users interested in Auto-NBA are comparing it to the libraries listed below.
- ☆26 · Updated last month
- ☆19 · Updated 4 years ago
- An open-sourced PyTorch library for developing energy-efficient multiplication-less models and applications. ☆13 · Updated 3 months ago
- A DAG processor and compiler for a tree-based spatial datapath. ☆13 · Updated 2 years ago
- [ICASSP'20] DNN-Chip Predictor: An Analytical Performance Predictor for DNN Accelerators with Various Dataflows and Hardware Architecture… ☆23 · Updated 2 years ago
- ☆34 · Updated 4 years ago
- ☆18 · Updated 2 years ago
- An FPGA-based neural network inference accelerator, which won third place in DAC-SDC. ☆28 · Updated 3 years ago
- Neural Network Quantization With Fractional Bit-widths. ☆12 · Updated 4 years ago
- Fast Emulation of Approximate DNN Accelerators in PyTorch. ☆22 · Updated last year
- Provides the code for the paper "EBPC: Extended Bit-Plane Compression for Deep Neural Network Inference and Training Accelerators" by Luk… ☆19 · Updated 5 years ago
- ☆70 · Updated 5 years ago
- Approximate layers: a TensorFlow extension. ☆27 · Updated last month
- SAMO: Streaming Architecture Mapping Optimisation. ☆32 · Updated last year
- ☆10 · Updated 5 months ago
- ☆13 · Updated 4 years ago
- Designs from finalist teams of the DAC System Design Contest. ☆37 · Updated 4 years ago
- A general framework for optimizing DNN dataflow on systolic arrays. ☆35 · Updated 4 years ago
- ☆23 · Updated 2 years ago
- ☆23 · Updated 3 years ago
- TBNv2: Convolutional Neural Network With Ternary Inputs and Binary Weights. ☆17 · Updated 5 years ago
- Training with Block Minifloat number representation. ☆14 · Updated 4 years ago
- MaxEVA: Maximizing the Efficiency of Matrix Multiplication on Versal AI Engine (accepted as a full paper at FPT'23). ☆20 · Updated last year
- Implementation of input-stationary, weight-stationary, and output-stationary dataflows for a given neural network on a tiled architecture. ☆9 · Updated 5 years ago
- ☆33 · Updated 3 years ago
- ☆71 · Updated 2 years ago
- ☆40 · Updated 10 months ago
- A reference implementation of the Mind Mappings Framework. ☆29 · Updated 3 years ago
- The code for Joint Neural Architecture Search and Quantization. ☆13 · Updated 6 years ago
- FlexASR: A Reconfigurable Hardware Accelerator for Attention-based Seq-to-Seq Networks. ☆46 · Updated 2 months ago