ok858ok / CP-ViT
Code for "CP-ViT: Cascade Vision Transformer Pruning via Progressive Sparsity Prediction" on CIFAR-10/100.
☆14 · Updated 2 years ago
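CP-ViT scores the informativeness of patch tokens from attention and prunes uninformative tokens progressively, layer by layer, in a cascade. Below is a minimal, hypothetical PyTorch sketch of that cascade token-pruning idea; `prune_tokens`, the class-token attention scoring, and the `keep_ratio` schedule are illustrative assumptions, not this repository's actual implementation.

```python
# Hypothetical sketch of cascade token pruning via attention-based importance
# scores; NOT the CP-ViT repository's code.
import torch

def prune_tokens(tokens, attn, keep_ratio):
    """tokens: (B, N, D) with the class token at index 0.
    attn: (B, H, N, N) attention weights from the current block.
    keep_ratio: fraction of patch tokens to keep (assumed hyperparameter).
    """
    B, N, D = tokens.shape
    # Importance of each patch token: the class token's attention to it,
    # averaged over heads (one common attention-based score).
    cls_attn = attn.mean(dim=1)[:, 0, 1:]              # (B, N-1)
    n_keep = max(1, int(keep_ratio * (N - 1)))
    idx = cls_attn.topk(n_keep, dim=1).indices         # most informative patches
    idx, _ = idx.sort(dim=1)                           # preserve spatial order
    patches = tokens[:, 1:, :]
    kept = patches.gather(1, idx.unsqueeze(-1).expand(-1, -1, D))
    return torch.cat([tokens[:, :1, :], kept], dim=1)  # re-attach class token

# Cascade usage across blocks: the token set shrinks progressively, e.g.
#   for block, r in zip(vit_blocks, keep_ratio_schedule):
#       tokens, attn = block(tokens)
#       tokens = prune_tokens(tokens, attn, r)
```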
Related projects
Alternatives and complementary repositories for CP-ViT
- [TCAD'23] AccelTran: A Sparsity-Aware Accelerator for Transformers ☆33 · Updated last year
- A co-designed architecture for sparse attention ☆44 · Updated 3 years ago
- [HPCA'21] SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning ☆76 · Updated 2 months ago
- ☆41 · Updated 3 years ago
- Open-source release of the MSD framework ☆14 · Updated last year
- [HPCA 2023] ViTCoD: Vision Transformer Acceleration via Dedicated Algorithm and Accelerator Co-Design ☆97 · Updated last year
- An FPGA-based neural network inference accelerator, which won third place in DAC-SDC ☆28 · Updated 2 years ago
- MNSIM_Python_v1.0. The former circuit-level version: https://github.com/Zhu-Zhenhua/MNSIM_V1.1 ☆34 · Updated 10 months ago
- ☆20 · Updated this week
- AFP is a hardware-friendly quantization framework for DNNs, contributed by Fangxin Liu and Wenbo Zhao. ☆11 · Updated 3 years ago
- ☆24 · Updated 8 months ago
- DeiT implementation for Q-ViT ☆23 · Updated 2 years ago
- ☆18 · Updated 2 years ago
- Vision Transformer Pruning ☆54 · Updated 2 years ago
- FPGA-based hardware accelerator for Vision Transformer (ViT), with Hybrid-Grained Pipeline. ☆10 · Updated 3 months ago
- The code and artifacts associated with our MICRO'22 paper "Adaptable Butterfly Accelerator for Attention-based NNs via Hardware …" ☆113 · Updated last year
- Neural Network Quantization With Fractional Bit-widths ☆12 · Updated 3 years ago
- ☆18 · Updated last year
- ☆41 · Updated 2 months ago
- ViTALiTy (HPCA'23) Code Repository ☆19 · Updated last year
- An FPGA Accelerator for Transformer Inference ☆73 · Updated 2 years ago
- Open-source framework for the HPCA 2024 paper "Gemini: Mapping and Architecture Co-exploration for Large-scale DNN Chiplet Accelerators" ☆56 · Updated 2 months ago
- [ICASSP'20] DNN-Chip Predictor: An Analytical Performance Predictor for DNN Accelerators with Various Dataflows and Hardware Architecture… ☆22 · Updated 2 years ago
- An out-of-the-box PyTorch scaffold for neural network quantization-aware-training (QAT) research. Website: https://github.com/zhutmost/neuralz… ☆26 · Updated last year
- A bit-level sparsity-aware multiply-accumulate processing element. ☆12 · Updated 4 months ago
- Eyeriss chip simulator ☆33 · Updated 4 years ago
- Edge-MoE: Memory-Efficient Multi-Task Vision Transformer Architecture with Task-level Sparsity via Mixture-of-Experts ☆87 · Updated 6 months ago
- ☆37 · Updated last year
- A framework for fast exploration of the depth-first scheduling space for DNN accelerators ☆32 · Updated last year
- ☆19 · Updated 3 years ago