bonanyan / attentionlego
Attentionlego
☆12 · Updated Jan 24, 2024
Alternatives and similar repositories for attentionlego
Users who are interested in attentionlego are comparing it with the repositories listed below.
- Accelerate multihead attention transformer model using HLS for FPGA · ☆11 · Updated Dec 7, 2023
- ☆10 · Updated Sep 26, 2024
- [DATE 2025] Official implementation and dataset of AIrchitect v2: Learning the Hardware Accelerator Design Space through Unified Represen… · ☆19 · Updated Jan 17, 2025
- ☆34 · Updated Jun 7, 2021
- A collection of research papers on SRAM-based compute-in-memory architectures. · ☆30 · Updated Nov 2, 2023
- [ASPLOS 2024] CIM-MLC: A Multi-level Compilation Stack for Computing-In-Memory Accelerators · ☆45 · Updated May 25, 2024
- ☆74 · Updated Feb 12, 2025
- An open-sourced PyTorch library for developing energy efficient multiplication-less models and applications. · ☆14 · Updated Feb 3, 2025
- ☆18 · Updated May 1, 2024
- ☆59 · Updated Feb 29, 2024
- SSR: Spatial Sequential Hybrid Architecture for Latency Throughput Tradeoff in Transformer Acceleration (Full Paper Accepted in FPGA'24) · ☆35 · Updated this week
- [TCAD'24] This repository contains the source code for the paper "FireFly v2: Advancing Hardware Support for High-Performance Spiking Neu… · ☆23 · Updated May 9, 2024
- [HPCA 2023] ViTCoD: Vision Transformer Acceleration via Dedicated Algorithm and Accelerator Co-Design · ☆127 · Updated Jun 27, 2023
- Edge-MoE: Memory-Efficient Multi-Task Vision Transformer Architecture with Task-level Sparsity via Mixture-of-Experts · ☆132 · Updated May 10, 2024
- HW/SW co-design of sentence-level energy optimizations for latency-aware multi-task NLP inference · ☆54 · Updated Mar 24, 2024
- Ratatoskr NoC Simulator · ☆29 · Updated Apr 13, 2021
- Collection of kernel accelerators optimised for LLM execution · ☆26 · Updated Nov 19, 2025
- CamJ: an energy modeling and system-level exploration framework for in-sensor visual computing · ☆23 · Updated Sep 29, 2023
- A series of quick-start guides for the Vitis HLS tool, in Chinese. It explains the basic concepts and the most important optimize techni… · ☆25 · Updated Nov 9, 2022
- [HPCA'21] SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning · ☆122 · Updated Aug 27, 2024
- ☆27 · Updated Jan 22, 2023
- This is just for Takk_Zynq_Labs test. · ☆27 · Updated Jan 14, 2022
- Research and Materials on Hardware implementation of Transformer Model · ☆298 · Updated Feb 28, 2025
- ☆32 · Updated Mar 31, 2025
- Benchmark framework of compute-in-memory based accelerators for deep neural network (inference engine focused) · ☆73 · Updated Dec 20, 2023
- A reading list for SRAM-based Compute-In-Memory (CIM) research. · ☆116 · Updated Oct 29, 2025
- A general framework for optimizing DNN dataflow on systolic array · ☆38 · Updated Jan 2, 2021
- ☆40 · Updated Apr 28, 2019
- Artifact material for [HPCA 2025] #2108 "UniNDP: A Unified Compilation and Simulation Tool for Near DRAM Processing Architectures" · ☆53 · Updated Sep 1, 2025
- sram/rram/mram.. compiler · ☆46 · Updated Sep 11, 2023
- PUMA Compiler · ☆30 · Updated Oct 13, 2025
- ☆40 · Updated Jun 3, 2023
- Unified Sparse Library Wrapper Based on cuSPARSE · ☆12 · Updated May 24, 2022
- Arche is a Greek word whose primary sense is "beginning". The repository defines a framework for technology mapping of emerging technologies… · ☆11 · Updated May 15, 2020
- An undergraduate graduation project done in the first semester of junior year, including surrogate-gradient training of a BNN, a Verilog circuit implementation, and a completed tape-out in a 180nm process. · ☆21 · Updated Jun 30, 2025
- Parallel optimization of an FFT algorithm on FPGA · ☆12 · Updated Mar 7, 2024
- FPGA-based hardware accelerator for Vision Transformer (ViT), with Hybrid-Grained Pipeline. · ☆125 · Updated Jan 20, 2025
- An FPGA Accelerator for Transformer Inference · ☆93 · Updated Apr 29, 2022
- DeepGate3 for ICCAD2024 · ☆13 · Updated May 26, 2025