ConstantPark / Neural-Network-Acceleration-3
☆13 · Updated 4 years ago
Alternatives and similar repositories for Neural-Network-Acceleration-3
Users interested in Neural-Network-Acceleration-3 are comparing it to the libraries listed below.
- ☆23 · Updated 3 years ago
- Neural network acceleration using CPU/GPU, ASIC, and FPGA ☆63 · Updated 5 years ago
- Neural network acceleration on ASIC, FPGA, GPU, and PIM ☆53 · Updated 5 years ago
- PyTorch implementation of TQT ☆21 · Updated 3 years ago
- Neural Network Quantization With Fractional Bit-widths ☆12 · Updated 4 years ago
- ☆14 · Updated 5 years ago
- Code for the paper "EBPC: Extended Bit-Plane Compression for Deep Neural Network Inference and Training Accelerators" by Luk… ☆20 · Updated 5 years ago
- [TCAD 2021] Block Convolution: Towards Memory-Efficient Inference of Large-Scale CNNs on FPGA ☆17 · Updated 3 years ago
- An implementation of YOLO using the LSQ network quantization method ☆23 · Updated 3 years ago
- Designs from finalist teams of the DAC System Design Contest ☆37 · Updated 5 years ago
- DAC System Design Contest 2020 ☆29 · Updated 5 years ago
- A DAG processor and compiler for a tree-based spatial datapath ☆14 · Updated 3 years ago
- Static block floating-point quantization for CNNs ☆35 · Updated 4 years ago
- ☆19 · Updated 4 years ago
- ☆10 · Updated last year
- 2020 Xilinx summer school ☆18 · Updated 5 years ago
- ☆16 · Updated 4 years ago
- ☆10 · Updated 9 months ago
- Post-training sparsity-aware quantization ☆34 · Updated 2 years ago
- ☆28 · Updated 3 years ago
- Fast emulation of approximate DNN accelerators in PyTorch ☆26 · Updated last year
- ☆20 · Updated 3 years ago
- ☆35 · Updated 6 years ago
- [ICML 2021] "Auto-NBA: Efficient and Effective Search Over the Joint Space of Networks, Bitwidths, and Accelerators" by Yonggan Fu, Yonga… ☆16 · Updated 3 years ago
- ☆26 · Updated 2 years ago
- ☆71 · Updated 5 years ago
- BISMO: A Scalable Bit-Serial Matrix Multiplication Overlay for Reconfigurable Computing ☆141 · Updated 5 years ago
- HLS implementation of a systolic array structure ☆41 · Updated 7 years ago
- BSQ: Exploring Bit-Level Sparsity for Mixed-Precision Neural Network Quantization (ICLR 2021) ☆42 · Updated 4 years ago
- HW/SW co-design of sentence-level energy optimizations for latency-aware multi-task NLP inference ☆52 · Updated last year