GATECH-EIC / ViTCoD
[HPCA 2023] ViTCoD: Vision Transformer Acceleration via Dedicated Algorithm and Accelerator Co-Design
☆102 · Updated last year
Alternatives and similar repositories for ViTCoD:
Users interested in ViTCoD are comparing it to the repositories listed below:
- [HPCA'21] SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning ☆82 · Updated 5 months ago
- A co-design architecture for sparse attention ☆51 · Updated 3 years ago
- The code and artifacts associated with our MICRO'22 paper "Adaptable Butterfly Accelerator for Attention-based NNs via Hardware …" ☆121 · Updated last year
- Edge-MoE: Memory-Efficient Multi-Task Vision Transformer Architecture with Task-level Sparsity via Mixture-of-Experts ☆101 · Updated 9 months ago
- ViTALiTy (HPCA'23) Code Repository ☆21 · Updated last year
- An FPGA Accelerator for Transformer Inference ☆76 · Updated 2 years ago
- An efficient spatial accelerator enabling hybrid sparse attention mechanisms for long sequences ☆24 · Updated 11 months ago
- SSR: Spatial Sequential Hybrid Architecture for Latency Throughput Tradeoff in Transformer Acceleration (full paper accepted at FPGA'24) ☆28 · Updated 6 months ago
- [TCAD'23] AccelTran: A Sparsity-Aware Accelerator for Transformers ☆34 · Updated last year
- Open-source release of the MSD framework ☆16 · Updated last year
- [ASPLOS 2024] CIM-MLC: A Multi-level Compilation Stack for Computing-In-Memory Accelerators ☆27 · Updated 8 months ago
- An FPGA accelerator for general-purpose Sparse-Matrix Dense-Matrix Multiplication (SpMM) ☆77 · Updated 6 months ago
- List of papers on Vision Transformer quantization and hardware acceleration from recent AI conferences and journals ☆73 · Updated 8 months ago
- RTL implementation of Flex-DPE.