ConstantPark / Neural-Network-Acceleration-3
☆15 · Updated 5 years ago
Alternatives and similar repositories for Neural-Network-Acceleration-3
Users interested in Neural-Network-Acceleration-3 are comparing it to the repositories listed below.
- Neural Network Acceleration using CPU/GPU, ASIC, FPGA ☆63 · Updated 5 years ago
- Neural Network Acceleration such as ASIC, FPGA, GPU, and PIM ☆54 · Updated 5 years ago
- ☆23 · Updated 4 years ago
- TQT's PyTorch implementation. ☆21 · Updated 4 years ago
- ☆42 · Updated 3 years ago
- DAC System Design Contest 2020 ☆29 · Updated 5 years ago
- Neural Network Quantization With Fractional Bit-widths ☆11 · Updated 4 years ago
- Designs for finalist teams of the DAC System Design Contest ☆37 · Updated 5 years ago
- ☆14 · Updated 5 years ago
- ☆19 · Updated 4 years ago
- BSQ: Exploring Bit-Level Sparsity for Mixed-Precision Neural Network Quantization (ICLR 2021) ☆42 · Updated 5 years ago
- Static Block Floating Point Quantization for CNN ☆37 · Updated 4 years ago
- [ICML 2021] "Auto-NBA: Efficient and Effective Search Over the Joint Space of Networks, Bitwidths, and Accelerators" by Yonggan Fu, Yonga… ☆16 · Updated 4 years ago
- 2020 Xilinx summer school ☆19 · Updated 5 years ago
- Provides the code for the paper "EBPC: Extended Bit-Plane Compression for Deep Neural Network Inference and Training Accelerators" by Luk… ☆19 · Updated 6 years ago
- Post-training sparsity-aware quantization ☆34 · Updated 2 years ago
- ☆12 · Updated 2 years ago
- A collection of research papers on efficient training of DNNs ☆70 · Updated 3 years ago
- ☆28 · Updated 4 years ago
- Simulator for BitFusion ☆102 · Updated 5 years ago
- An implementation of YOLO using the LSQ network quantization method. ☆22 · Updated 3 years ago
- [TCAD 2021] Block Convolution: Towards Memory-Efficient Inference of Large-Scale CNNs on FPGA ☆17 · Updated 3 years ago
- Any-Precision Deep Neural Networks (AAAI 2021) ☆62 · Updated 5 years ago
- BISMO: A Scalable Bit-Serial Matrix Multiplication Overlay for Reconfigurable Computing ☆149 · Updated 6 years ago
- Conditional channel- and precision-pruning on neural networks ☆72 · Updated 5 years ago
- nnq_cnd_study stands for Neural Network Quantization & Compact Networks Design Study ☆13 · Updated 5 years ago
- ☆19 · Updated 3 years ago
- BitSplit Post-training Quantization ☆50 · Updated 4 years ago
- [FPGA'21] CoDeNet is an efficient object detection model on PyTorch, with SOTA performance on VOC and COCO based on CenterNet and Co-Desi… ☆27 · Updated 2 years ago
- ☆71 · Updated 5 years ago