jha-lab / acceltran
[TCAD'23] AccelTran: A Sparsity-Aware Accelerator for Transformers
☆51 · Updated last year
Alternatives and similar repositories for acceltran
Users interested in acceltran are comparing it to the repositories listed below.
- ☆48 · Updated 4 years ago
- A co-designed architecture for sparse attention ☆51 · Updated 4 years ago
- SSR: Spatial Sequential Hybrid Architecture for Latency-Throughput Tradeoff in Transformer Acceleration (Full Paper Accepted at FPGA'24) ☆32 · Updated this week
- A Reconfigurable Accelerator with Data Reordering Support for Low-Cost On-Chip Dataflow Switching ☆62 · Updated 3 weeks ago
- Open-source release of the MSD framework ☆16 · Updated 2 years ago
- [HPCA 2023] ViTCoD: Vision Transformer Acceleration via Dedicated Algorithm and Accelerator Co-Design ☆116 · Updated 2 years ago
- A framework for fast exploration of the depth-first scheduling space for DNN accelerators ☆40 · Updated 2 years ago
- An open-source, parameterizable NPU generator with a full-stack, multi-target compilation flow for intelligent workloads ☆63 · Updated 6 months ago
- Multi-core HW accelerator mapping optimization framework for layer-fused ML workloads ☆58 · Updated 2 months ago
- An FPGA Accelerator for Transformer Inference ☆88 · Updated 3 years ago
- A systolic array simulator for multi-cycle MACs and varying-byte words, with the paper accepted to HPCA 2022 ☆80 · Updated 3 years ago
- RTL implementation of Flex-DPE ☆110 · Updated 5 years ago
- Model LLM inference on single-core dataflow accelerators ☆14 · Updated last month
- ☆44 · Updated 2 years ago
- [HPCA'21] SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning ☆102 · Updated last year
- ☆35 · Updated 5 years ago
- A bit-level sparsity-aware multiply-accumulate processing element ☆16 · Updated last year
- MICRO'22 artifact evaluation for Sparseloop ☆44 · Updated 3 years ago
- Linux Docker image for the DNN accelerator exploration infrastructure composed of Accelergy and Timeloop ☆57 · Updated 5 months ago
- An efficient spatial accelerator enabling hybrid sparse attention mechanisms for long sequences ☆29 · Updated last year
- FPGA-based hardware accelerator for Vision Transformer (ViT) with a hybrid-grained pipeline ☆87 · Updated 7 months ago
- An FPGA accelerator for general-purpose Sparse-Matrix Dense-Matrix Multiplication (SpMM) ☆84 · Updated last year
- ☆50 · Updated last month
- ☆28 · Updated 5 months ago
- ☆43 · Updated last month
- ☆31 · Updated 2 weeks ago
- ☆56 · Updated last year
- Implementation of Microscaling data formats in SystemVerilog ☆23 · Updated 2 months ago
- Edge-MoE: Memory-Efficient Multi-Task Vision Transformer Architecture with Task-Level Sparsity via Mixture-of-Experts ☆126 · Updated last year
- Collection of kernel accelerators optimised for LLM execution ☆21 · Updated 5 months ago