sjtu-zhao-lab / SALO
An efficient spatial accelerator enabling hybrid sparse attention mechanisms for long sequences
☆25 · Updated last year
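As a rough illustration of the hybrid sparse attention pattern that accelerators like SALO target (a local sliding window combined with a few global tokens), here is a minimal NumPy sketch. It is not code from the SALO repository; the function names (`hybrid_sparse_mask`, `sparse_attention`) and parameters (`window`, `global_idx`) are purely illustrative.

```python
import numpy as np

def hybrid_sparse_mask(seq_len, window, global_idx):
    """Boolean mask: True where a query token may attend to a key token."""
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    for i in range(seq_len):
        lo, hi = max(0, i - window), min(seq_len, i + window + 1)
        mask[i, lo:hi] = True          # local sliding-window attention
    mask[:, global_idx] = True         # every token attends to the global tokens
    mask[global_idx, :] = True         # global tokens attend to every token
    return mask

def sparse_attention(q, k, v, mask):
    """Masked softmax attention; computed densely here, skipped in hardware."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores = np.where(mask, scores, -np.inf)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
L, D = 16, 8
q, k, v = (rng.standard_normal((L, D)) for _ in range(3))
out = sparse_attention(q, k, v, hybrid_sparse_mask(L, window=2, global_idx=[0]))
print(out.shape)   # (16, 8)
```

A software sketch like this still computes every score and discards the masked ones; a hardware accelerator skips the masked positions entirely, which is where the efficiency gain for long sequences comes from.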
Alternatives and similar repositories for SALO:
Users interested in SALO are comparing it to the libraries listed below.
- A co-design architecture on sparse attention ☆50 · Updated 3 years ago
- ☆43 · Updated 3 years ago
- ViTALiTy (HPCA'23) Code Repository ☆21 · Updated 2 years ago
- [FPGA 2024] FPGA Accelerator for Imbalanced SpMV using HLS ☆10 · Updated last month
- [TCAD'23] AccelTran: A Sparsity-Aware Accelerator for Transformers ☆36 · Updated last year
- MICRO22 artifact evaluation for Sparseloop ☆42 · Updated 2 years ago
- A framework for fast exploration of the depth-first scheduling space for DNN accelerators ☆37 · Updated 2 years ago
- Open-source framework for the HPCA 2024 paper: Gemini: Mapping and Architecture Co-exploration for Large-scale DNN Chiplet Accelerators ☆72 · Updated this week
- ☆22 · Updated this week
- ☆21 · Updated 2 months ago
- PALM: An Efficient Performance Simulator for Tiled Accelerators with Large-scale Model Training ☆15 · Updated 9 months ago
- Multi-core HW accelerator mapping optimization framework for layer-fused ML workloads ☆48 · Updated 3 weeks ago
- A Reconfigurable Accelerator with Data Reordering Support for Low-Cost On-Chip Dataflow Switching ☆46 · Updated 5 months ago
- Open-source release of the MSD framework ☆16 · Updated last year
- SSR: Spatial Sequential Hybrid Architecture for Latency Throughput Tradeoff in Transformer Acceleration (Full Paper Accepted in FPGA'24) ☆29 · Updated 7 months ago
- The framework for the paper "Inter-layer Scheduling Space Definition and Exploration for Tiled Accelerators" in ISCA 2023 ☆61 · Updated 2 weeks ago
- ☆19 · Updated last year
- The code and artifacts associated with our MICRO'22 paper titled: "Adaptable Butterfly Accelerator for Attention-based NNs via Hardware …" ☆123 · Updated last year
- ☆25 · Updated 2 months ago
- [HPCA'21] SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning ☆83 · Updated 6 months ago
- [ASPLOS 2024] CIM-MLC: A Multi-level Compilation Stack for Computing-In-Memory Accelerators ☆28 · Updated 9 months ago
- ☆23 · Updated 7 months ago
- An open-source parameterizable NPU generator with a full-stack, multi-target compilation stack for intelligent workloads ☆47 · Updated this week
- Linux Docker image for the DNN accelerator exploration infrastructure composed of Accelergy and Timeloop ☆48 · Updated last week
- A bit-level sparsity-aware multiply-accumulate processing element ☆13 · Updated 8 months ago
- Tender: Accelerating Large Language Models via Tensor Decomposition and Runtime Requantization (ISCA'24) ☆13 · Updated 8 months ago
- ☆32 · Updated 4 years ago
- ☆39 · Updated 8 months ago
- ☆91 · Updated last year
- [HPCA 2023] ViTCoD: Vision Transformer Acceleration via Dedicated Algorithm and Accelerator Co-Design ☆103 · Updated last year