arc-research-lab / SSR
SSR: Spatial Sequential Hybrid Architecture for Latency Throughput Tradeoff in Transformer Acceleration (Full Paper Accepted at FPGA'24)
☆32 · Updated this week
Alternatives and similar repositories for SSR
Users interested in SSR are comparing it to the repositories listed below.
- An FPGA accelerator for general-purpose Sparse-Matrix Dense-Matrix Multiplication (SpMM); a reference SpMM kernel is sketched after this list. ☆80 · Updated 11 months ago
- [TCAD'23] AccelTran: A Sparsity-Aware Accelerator for Transformers ☆49 · Updated last year
- An open-source parameterizable NPU generator with a full-stack, multi-target compilation flow for intelligent workloads. ☆60 · Updated 4 months ago
- A Reconfigurable Accelerator with Data Reordering Support for Low-Cost On-Chip Dataflow Switching ☆54 · Updated 3 months ago
- CHARM: Composing Heterogeneous Accelerators on Heterogeneous SoC Architecture ☆147 · Updated this week
- FPGA-based hardware accelerator for Vision Transformer (ViT) with a hybrid-grained pipeline. ☆73 · Updated 5 months ago
- Open-source release of the MSD framework ☆16 · Updated last year
- A co-design architecture for sparse attention ☆52 · Updated 3 years ago
- ☆27 · Updated 3 months ago
- ☆61 · Updated last month
- An HLS-based Winograd systolic CNN accelerator ☆53 · Updated 3 years ago
- High-Performance Sparse Linear Algebra on HBM-Equipped FPGAs Using HLS ☆93 · Updated 9 months ago
- A framework for fast exploration of the depth-first scheduling space for DNN accelerators ☆39 · Updated 2 years ago
- An FPGA Accelerator for Transformer Inference ☆85 · Updated 3 years ago
- Multi-core HW accelerator mapping optimization framework for layer-fused ML workloads. ☆54 · Updated last week
- A systolic array simulator for multi-cycle MACs and varying-byte words, with the paper accepted to HPCA 2022; a cycle-level scheduling sketch follows the list. ☆79 · Updated 3 years ago
- A bit-level sparsity-aware multiply-accumulate processing element. ☆16 · Updated last year
- Collection of kernel accelerators optimised for LLM execution ☆19 · Updated 3 months ago
- ☆49 · Updated 3 years ago
- RTL implementation of Flex-DPE. ☆106 · Updated 5 years ago
- ☆35 · Updated 5 years ago
- ☆41 · Updated last year
- HW Architecture-Mapping Design Space Exploration Framework for Deep Learning Accelerators ☆151 · Updated last week
- Edge-MoE: Memory-Efficient Multi-Task Vision Transformer Architecture with Task-level Sparsity via Mixture-of-Experts ☆122 · Updated last year
- [HPCA 2023] ViTCoD: Vision Transformer Acceleration via Dedicated Algorithm and Accelerator Co-Design ☆108 · Updated 2 years ago
- Implementation of Microscaling data formats in SystemVerilog; a block-quantization sketch follows the list. ☆21 · Updated last week
- A simulator for the SK hynix AiM PIM architecture, based on Ramulator 2.0 ☆27 · Updated 5 months ago
- A reading list for SRAM-based Compute-In-Memory (CIM) research. ☆71 · Updated last month
- MICRO'22 artifact evaluation for Sparseloop ☆45 · Updated 2 years ago
- Benchmark framework for compute-in-memory-based deep neural network accelerators (inference-engine focused) ☆72 · Updated 4 months ago
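Several entries above accelerate sparse-matrix dense-matrix multiplication (SpMM). As a functional point of reference for what those designs compute, not an implementation from any listed repository, here is a minimal CSR-based SpMM in Python; the function name and argument layout are illustrative assumptions.

```python
import numpy as np

def spmm_csr(values, col_idx, row_ptr, B):
    """Reference SpMM: C = A @ B with the sparse operand A stored in CSR.

    values  -- nonzero values of A in row-major order
    col_idx -- column index of each nonzero
    row_ptr -- row_ptr[i]:row_ptr[i+1] spans the nonzeros of row i
    B       -- dense right-hand-side matrix
    """
    n_rows = len(row_ptr) - 1
    C = np.zeros((n_rows, B.shape[1]), dtype=B.dtype)
    for i in range(n_rows):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            # Each nonzero A[i, col] scales one full row of B.
            C[i] += values[k] * B[col_idx[k]]
    return C

# Sanity check against dense matmul: A = [[0, 2, 0], [1, 0, 3]].
values, col_idx, row_ptr = [2.0, 1.0, 3.0], [1, 0, 2], [0, 1, 3]
B = np.random.rand(3, 4)
A = np.array([[0.0, 2.0, 0.0], [1.0, 0.0, 3.0]])
assert np.allclose(spmm_csr(values, col_idx, row_ptr, B), A @ B)
```

The row-dependent inner loop bounds (row_ptr[i] to row_ptr[i+1]) are the source of the irregular memory access that SpMM accelerators work to hide.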
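Systolic arrays also recur in the list. Below is a minimal cycle-level sketch, under the assumption of an output-stationary dataflow, of the skewed schedule such an array follows when computing A @ B; it is a didactic model, not code from the simulator repositories above.

```python
import numpy as np

def systolic_matmul(A, B):
    """Output-stationary systolic schedule for C = A @ B.

    PE (i, j) holds C[i, j] and fires its k-th MAC at cycle t = i + j + k,
    mimicking operands skewed into the array from the left and top edges.
    """
    M, K = A.shape
    _, N = B.shape
    C = np.zeros((M, N), dtype=A.dtype)
    for t in range(M + N + K - 2):        # last MAC fires at t = M+N+K-3
        for i in range(M):
            for j in range(N):
                k = t - i - j             # which operand pair arrives at PE (i, j) now
                if 0 <= k < K:
                    C[i, j] += A[i, k] * B[k, j]   # one MAC per PE per cycle
    return C

A, B = np.random.rand(4, 3), np.random.rand(3, 5)
assert np.allclose(systolic_matmul(A, B), A @ B)
```

Because PE (i, j) performs its k-th MAC at cycle i + j + k, results drain from the array in a wavefront and the whole product takes M + N + K - 2 cycles.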
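The Microscaling (MX) entry refers to the OCP block data formats, in which a block of elements (32 in the OCP v1.0 spec) shares a single power-of-two scale while each element is stored in a narrow type such as FP8 or INT8. Below is a simplified MXINT8-style sketch assuming int8 elements and a per-block power-of-two exponent; the helper names are hypothetical, and it omits the spec's E8M0 scale encoding and special-value handling.

```python
import numpy as np

def mx_quantize(x, block=32, elem_max=127):
    """Quantize x into int8 blocks that share one power-of-two scale per block."""
    x = x.reshape(-1, block)                          # assumes len(x) % block == 0
    max_abs = np.abs(x).max(axis=1, keepdims=True)
    # Smallest shared exponent e with max_abs / 2**e inside the element range.
    safe = np.maximum(max_abs, np.finfo(np.float32).tiny)
    e = np.where(max_abs > 0, np.ceil(np.log2(safe / elem_max)), 0.0)
    q = np.clip(np.round(x / 2.0 ** e), -elem_max, elem_max).astype(np.int8)
    return q, e.astype(np.int32)

def mx_dequantize(q, e):
    return q.astype(np.float32) * 2.0 ** e.astype(np.float32)

x = np.random.randn(64).astype(np.float32)
q, e = mx_quantize(x)
print("max reconstruction error:", np.abs(mx_dequantize(q, e).reshape(-1) - x).max())
```

In hardware the division by 2**e is just an exponent adjustment, which is what makes MX formats cheap to encode and decode.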