MXHX7199 / ICCV_2021_AFP
AFP is a hardware-friendly quantization framework for DNNs, contributed by Fangxin Liu and Wenbo Zhao.
☆12 · Updated 3 years ago
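As a rough illustration of what hardware-friendly, adaptive low-bit floating-point quantization looks like, here is a minimal NumPy sketch. The function name, bit widths (1-bit sign, 3-bit exponent, 3-bit mantissa), and exponent-bias rule below are assumptions for illustration only, not the repository's actual API.

```python
# Minimal sketch of adaptive low-bit floating-point quantization.
# Illustrative only: the function name, bit widths, and bias rule are
# assumptions, not the AFP repository's actual implementation.
import numpy as np

def afp_like_quantize(x: np.ndarray, exp_bits: int = 3, man_bits: int = 3) -> np.ndarray:
    """Round x to a toy (1-bit sign, exp_bits exponent, man_bits mantissa) format."""
    max_exp = 2 ** exp_bits - 1
    # "Adaptive" part: pick the exponent bias so the format's largest
    # representable magnitude lines up with the tensor's dynamic range.
    bias = max_exp - np.floor(np.log2(np.max(np.abs(x)) + 1e-12))
    mag = np.abs(x)
    # Per-element exponent, clipped to the representable range.
    e = np.clip(np.floor(np.log2(mag + 1e-12)) + bias, 0, max_exp)
    # Quantization step at that exponent: man_bits fractional mantissa bits.
    step = 2.0 ** (e - bias - man_bits)
    return np.sign(x) * np.round(mag / step) * step

w = np.random.randn(256).astype(np.float32)
w_q = afp_like_quantize(w)
print("mean abs quantization error:", np.mean(np.abs(w - w_q)))
```

The narrow word width is what makes such formats attractive for hardware: a (1, 3, 3) value fits in 7 bits, while the shared per-tensor bias lets the format track each tensor's dynamic range.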
Alternatives and similar repositories for ICCV_2021_AFP
Users interested in ICCV_2021_AFP are comparing it to the libraries listed below.
- BitFusion Verilog implementation ☆10 · Updated 3 years ago
- ☆18 · Updated 2 years ago
- ☆27 · Updated 2 months ago
- Open-source release of the MSD framework ☆16 · Updated last year
- BSQ: Exploring Bit-Level Sparsity for Mixed-Precision Neural Network Quantization (ICLR 2021) ☆40 · Updated 4 years ago
- Sparse CNN accelerator targeting Intel FPGAs ☆12 · Updated 3 years ago
- ☆29 · Updated last week
- Implementations of basic hardware units in RTL (Verilog for now), which can be used for area/power evaluation and … ☆11 · Updated last year
- Official implementation of the HPCA 2025 paper "Prosperity: Accelerating Spiking Neural Networks via Product Sparsity" ☆31 · Updated 4 months ago
- ☆35 · Updated 4 years ago
- LoAS: Fully Temporal-Parallel Dataflow for Dual-Sparse Spiking Neural Networks (MICRO 2024) ☆11 · Updated 3 months ago
- A bit-level sparsity-aware multiply-accumulate processing element. ☆16 · Updated 11 months ago
- ☆33 · Updated 3 years ago
- Fast Emulation of Approximate DNN Accelerators in PyTorch ☆23 · Updated last year
- A co-designed architecture for sparse attention ☆52 · Updated 3 years ago
- DeiT implementation for Q-ViT ☆25 · Updated 2 months ago
- MINT: Multiplier-less INTeger Quantization for Energy-Efficient Spiking Neural Networks (ASP-DAC 2024, nominated for Best Paper Award) ☆14 · Updated last year
- A framework for fast exploration of the depth-first scheduling space for DNN accelerators ☆39 · Updated 2 years ago
- FracBNN: Accurate and FPGA-Efficient Binary Neural Networks with Fractional Activations ☆94 · Updated 3 years ago
- ☆47 · Updated 3 years ago
- ☆19 · Updated 4 years ago
- [TCAD'23] AccelTran: A Sparsity-Aware Accelerator for Transformers ☆46 · Updated last year
- ☆101 · Updated last year
- ☆41 · Updated 11 months ago
- MNSIM_Python_v1.0. The former circuit-level version: https://github.com/Zhu-Zhenhua/MNSIM_V1.1 ☆34 · Updated last year
- An out-of-the-box PyTorch scaffold for neural network quantization-aware training (QAT) research. Website: https://github.com/zhutmost/neuralz… ☆26 · Updated 2 years ago
- Neural Network Quantization With Fractional Bit-widths ☆12 · Updated 4 years ago
- MICRO22 artifact evaluation for Sparseloop ☆44 · Updated 2 years ago
- ☆19 · Updated 3 years ago
- ☆71 · Updated 5 years ago