ok858ok / CP-ViT
Code for "CP-ViT: Cascade Vision Transformer Pruning via Progressive Sparsity Prediction" on CIFAR-10/100.
☆14 · Updated 3 years ago
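For context on the technique named in the description: cascade pruning drops uninformative patch tokens at selected encoder layers, with deeper stages keeping progressively fewer tokens. The sketch below is a minimal, hypothetical PyTorch illustration of that cascade token-pruning pattern, not the CP-ViT implementation; the CLS-attention scoring rule, the `keep_ratio` schedule, and all module names are assumptions for demonstration only.

```python
# A minimal, illustrative sketch of cascade token pruning in a ViT encoder.
# NOT the CP-ViT codebase: scoring rule and keep_ratio schedule are assumed.
import torch
import torch.nn as nn

class PrunedEncoderLayer(nn.Module):
    """A standard pre-norm ViT block that optionally drops low-scoring patch tokens."""
    def __init__(self, dim=192, heads=3, keep_ratio=1.0):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))
        self.keep_ratio = keep_ratio  # fraction of patch tokens kept (assumed schedule)

    def forward(self, x):  # x: (batch, 1 + n_patches, dim); token 0 is CLS
        h = self.norm1(x)
        attn_out, attn_w = self.attn(h, h, h, need_weights=True,
                                     average_attn_weights=True)
        x = x + attn_out
        x = x + self.mlp(self.norm2(x))
        if self.keep_ratio < 1.0:
            # Score each patch token by how much attention the CLS token pays it,
            # then keep only the top-scoring patches (plus CLS itself).
            cls_attn = attn_w[:, 0, 1:]                       # (batch, n_patches)
            n_keep = max(1, int(cls_attn.size(1) * self.keep_ratio))
            idx = cls_attn.topk(n_keep, dim=1).indices + 1    # +1: skip past CLS
            idx = torch.cat([torch.zeros_like(idx[:, :1]), idx], dim=1)
            x = x.gather(1, idx.unsqueeze(-1).expand(-1, -1, x.size(-1)))
        return x

# Cascade: deeper stages keep progressively fewer tokens.
encoder = nn.Sequential(*[PrunedEncoderLayer(keep_ratio=r)
                          for r in (1.0, 1.0, 0.7, 0.7, 0.5, 0.5)])
tokens = torch.randn(2, 1 + 196, 192)   # (batch, CLS + 14x14 patches, dim)
print(encoder(tokens).shape)            # token count shrinks after each pruned stage
```

Because pruning happens inside the forward pass, the FLOPs saved compound across layers; CP-ViT's actual predictor-driven sparsity schedule differs from the fixed ratios assumed here.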
Alternatives and similar repositories for CP-ViT:
Users interested in CP-ViT are comparing it to the repositories listed below.
- Vision Transformer Pruning ☆54 · Updated 3 years ago
- ☆25 · Updated last month
- Open-source release of the MSD framework ☆16 · Updated last year
- An FPGA-based neural network inference accelerator, which won third place in DAC-SDC ☆28 · Updated 2 years ago
- DeiT implementation for Q-ViT ☆24 · Updated 2 years ago
- ☆18 · Updated last year
- [TCAD'23] AccelTran: A Sparsity-Aware Accelerator for Transformers ☆33 · Updated last year
- ViTALiTy (HPCA'23) Code Repository ☆21 · Updated last year
- A co-design architecture for sparse attention ☆49 · Updated 3 years ago
- ☆21 · Updated this week
- [ICML 2021] "Auto-NBA: Efficient and Effective Search Over the Joint Space of Networks, Bitwidths, and Accelerators" by Yonggan Fu, Yonga… ☆15 · Updated 3 years ago
- AFP, a hardware-friendly quantization framework for DNNs, contributed by Fangxin Liu and Wenbo Zhao ☆12 · Updated 3 years ago
- FPGA-based hardware accelerator for Vision Transformer (ViT), with a hybrid-grained pipeline ☆22 · Updated this week
- Neural Network Quantization With Fractional Bit-widths ☆12 · Updated 3 years ago
- [HPCA'21] SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning ☆81 · Updated 4 months ago
- [HPCA 2023] ViTCoD: Vision Transformer Acceleration via Dedicated Algorithm and Accelerator Co-Design ☆102 · Updated last year
- The final project repository for 2022 Spring COMS6998-009 Deep Learning System Performance at Columbia University ☆7 · Updated 2 years ago
- The code and artifacts for our MICRO'22 paper "Adaptable Butterfly Accelerator for Attention-based NNs via Hardware …" ☆119 · Updated last year
- An efficient spatial accelerator enabling hybrid sparse attention mechanisms for long sequences ☆24 · Updated 10 months ago
- An HLS-based Winograd systolic CNN accelerator ☆49 · Updated 3 years ago
- ☆12 · Updated 2 years ago
- ☆38 · Updated last year
- ☆18 · Updated 2 years ago
- ☆42 · Updated 3 years ago
- Mixed-precision quantization for LLMs ☆18 · Updated last year
- ☆18 · Updated 3 years ago
- ☆88 · Updated last year
- ☆42 · Updated 4 months ago
- MNSIM_Python_v1.0; the earlier circuit-level version is at https://github.com/Zhu-Zhenhua/MNSIM_V1.1 ☆34 · Updated last year
- An out-of-the-box PyTorch scaffold for neural network quantization-aware training (QAT) research. Website: https://github.com/zhutmost/neuralz… ☆26 · Updated 2 years ago