MXHX7199 / ICCV_2021_AFP
AFP is a hardware-friendly quantization framework for DNNs, contributed by Fangxin Liu and Wenbo Zhao.
☆13 · Updated 3 years ago
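The description above calls AFP a hardware-friendly quantization framework. As a generic illustration of what a DNN quantizer does, here is a minimal sketch of plain symmetric uniform quantization; note this is *not* AFP's actual adaptive floating-point scheme, just the textbook baseline it improves on:

```python
# Generic symmetric uniform quantization -- an illustrative baseline,
# NOT the AFP algorithm from the paper.
import numpy as np

def quantize(w, num_bits=8):
    """Map a float tensor to signed integers with one per-tensor scale."""
    qmax = 2 ** (num_bits - 1) - 1            # 127 for 8 bits
    scale = float(np.max(np.abs(w))) / qmax   # symmetric per-tensor scale
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor from integers and the scale."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.0, 0.25, 0.0], dtype=np.float32)
q, s = quantize(w)
w_hat = dequantize(q, s)
```

With a per-tensor scale the reconstruction error of each element is bounded by half a quantization step, which is why bit-width (and, in AFP's case, the number format itself) directly trades accuracy for hardware cost.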
Alternatives and similar repositories for ICCV_2021_AFP
Users interested in ICCV_2021_AFP are comparing it to the libraries listed below.
- bitfusion verilog implementation ☆10 · Updated 3 years ago
- ☆18 · Updated 2 years ago
- BSQ: Exploring Bit-Level Sparsity for Mixed-Precision Neural Network Quantization (ICLR 2021) ☆40 · Updated 4 years ago
- ☆27 · Updated 3 months ago
- Open-source release of the MSD framework ☆16 · Updated last year
- Fast Emulation of Approximate DNN Accelerators in PyTorch ☆23 · Updated last year
- [TCAD'23] AccelTran: A Sparsity-Aware Accelerator for Transformers ☆49 · Updated last year
- ☆29 · Updated this week
- Implementations of basic hardware units in RTL (Verilog for now), which can be used for area/power evaluation and … ☆11 · Updated last year
- Sparse CNN Accelerator targeting Intel FPGA ☆12 · Updated 3 years ago
- FracBNN: Accurate and FPGA-Efficient Binary Neural Networks with Fractional Activations ☆94 · Updated 3 years ago
- [ICASSP'20] DNN-Chip Predictor: An Analytical Performance Predictor for DNN Accelerators with Various Dataflows and Hardware Architecture… ☆25 · Updated 2 years ago
- ☆19 · Updated 3 years ago
- [HPCA 2023] ViTCoD: Vision Transformer Acceleration via Dedicated Algorithm and Accelerator Co-Design ☆109 · Updated 2 years ago
- ☆49 · Updated 3 years ago
- ☆46 · Updated 7 months ago
- A framework for fast exploration of the depth-first scheduling space for DNN accelerators ☆39 · Updated 2 years ago
- ☆35 · Updated 5 years ago
- A DAG processor and compiler for a tree-based spatial datapath. ☆13 · Updated 2 years ago
- [ICML 2021] "Auto-NBA: Efficient and Effective Search Over the Joint Space of Networks, Bitwidths, and Accelerators" by Yonggan Fu, Yonga… ☆16 · Updated 3 years ago
- DeiT implementation for Q-ViT ☆25 · Updated 2 months ago
- Neural Network Quantization With Fractional Bit-widths ☆12 · Updated 4 years ago
- Official implementation of the HPCA 2025 paper "Prosperity: Accelerating Spiking Neural Networks via Product Sparsity" ☆33 · Updated 5 months ago
- ☆103 · Updated last year
- A bit-level sparsity-aware multiply-accumulate processing element. ☆16 · Updated last year
- An FPGA-based neural network inference accelerator, which won third place in DAC-SDC ☆28 · Updated 3 years ago
- Code for the paper "Exploring Bit-Slice Sparsity in Deep Neural Networks for Efficient ReRAM-Based Deployment" [NeurIPS'19 EMC2 workshop]… ☆11 · Updated 4 years ago
- A co-design architecture for sparse attention ☆52 · Updated 3 years ago
- MNSIM_Python_v1.0. The earlier circuit-level version: https://github.com/Zhu-Zhenhua/MNSIM_V1.1 ☆34 · Updated last year
- ☆71 · Updated 5 years ago