Attentionlego
☆13 · Jan 24, 2024 (updated 2 years ago)
Alternatives and similar repositories for attentionlego
Users interested in attentionlego are comparing it to the libraries listed below.
- Accelerate multihead attention transformer model using HLS for FPGA ☆11 · Dec 7, 2023 (updated 2 years ago)
- ☆10 · Sep 26, 2024 (updated last year)
- [DATE 2025] Official implementation and dataset of AIrchitect v2: Learning the Hardware Accelerator Design Space through Unified Represen… ☆19 · Jan 17, 2025 (updated last year)
- ☆34 · Jun 7, 2021 (updated 4 years ago)
- A collection of research papers on SRAM-based compute-in-memory architectures. ☆30 · Nov 2, 2023 (updated 2 years ago)
- [ASPLOS 2024] CIM-MLC: A Multi-level Compilation Stack for Computing-In-Memory Accelerators ☆45 · May 25, 2024 (updated last year)
- ☆74 · Feb 12, 2025 (updated last year)
- An open-sourced PyTorch library for developing energy-efficient multiplication-less models and applications. ☆14 · Feb 3, 2025 (updated last year)
- ☆18 · May 1, 2024 (updated last year)
- ☆60 · Feb 29, 2024 (updated 2 years ago)
- SSR: Spatial Sequential Hybrid Architecture for Latency Throughput Tradeoff in Transformer Acceleration (Full Paper Accepted in FPGA'24) ☆36 · updated this week
- [HPCA 2023] ViTCoD: Vision Transformer Acceleration via Dedicated Algorithm and Accelerator Co-Design ☆128 · Jun 27, 2023 (updated 2 years ago)
- Edge-MoE: Memory-Efficient Multi-Task Vision Transformer Architecture with Task-level Sparsity via Mixture-of-Experts ☆134 · May 10, 2024 (updated last year)
- HW/SW co-design of sentence-level energy optimizations for latency-aware multi-task NLP inference ☆54 · Mar 24, 2024 (updated last year)
- Ratatoskr NoC Simulator ☆29 · Apr 13, 2021 (updated 4 years ago)
- Collection of kernel accelerators optimised for LLM execution ☆27 · Feb 26, 2026 (updated last week)
- CamJ: an energy modeling and system-level exploration framework for in-sensor visual computing ☆24 · Sep 29, 2023 (updated 2 years ago)
- A series of quick-start guides for the Vitis HLS tool, in Chinese. It explains the basic concepts and the most important optimize techni… ☆26 · Nov 9, 2022 (updated 3 years ago)
- ☆27 · Jan 22, 2023 (updated 3 years ago)
- [HPCA'21] SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning ☆125 · Aug 27, 2024 (updated last year)
- This is just for Takk_Zynq_Labs test. ☆27 · Jan 14, 2022 (updated 4 years ago)
- Research and Materials on Hardware Implementation of Transformer Model ☆298 · Feb 28, 2025 (updated last year)
- A reading list for SRAM-based Compute-In-Memory (CIM) research. ☆117 · Oct 29, 2025 (updated 4 months ago)
- Benchmark framework of compute-in-memory based accelerators for deep neural networks (inference engine focused) ☆73 · Dec 20, 2023 (updated 2 years ago)
- A general framework for optimizing DNN dataflow on systolic arrays ☆39 · Jan 2, 2021 (updated 5 years ago)
- SRAM/RRAM/MRAM… compiler ☆47 · Sep 11, 2023 (updated 2 years ago)
- Artifact material for [HPCA 2025] #2108 "UniNDP: A Unified Compilation and Simulation Tool for Near DRAM Processing Architectures" ☆54 · Sep 1, 2025 (updated 6 months ago)
- ☆40 · Jun 3, 2023 (updated 2 years ago)
- PUMA Compiler ☆30 · Oct 13, 2025 (updated 4 months ago)
- ☆41 · Apr 28, 2019 (updated 6 years ago)
- Unified Sparse Library Wrapper Based on cuSPARSE ☆12 · May 24, 2022 (updated 3 years ago)
- Arche is a Greek word whose primary sense is "beginning". The repository defines a framework for technology mapping of emerging technologies… ☆11 · May 15, 2020 (updated 5 years ago)
- Undergraduate thesis project from the first semester of junior year: surrogate-gradient training of a BNN, Verilog circuit implementation, and tape-out in a 180nm process. ☆21 · Jun 30, 2025 (updated 8 months ago)
- FPGA-based parallel optimization of the FFT algorithm ☆13 · Mar 7, 2024 (updated 2 years ago)
- An FPGA-based CNN accelerator, following Google's TPU v1. ☆172 · Jul 25, 2019 (updated 6 years ago)
- FPGA-based hardware accelerator for Vision Transformer (ViT), with Hybrid-Grained Pipeline. ☆129 · Jan 20, 2025 (updated last year)
- An FPGA Accelerator for Transformer Inference ☆93 · Apr 29, 2022 (updated 3 years ago)
- The project includes SRAM In Memory Computing Accelerator with updates in design/circuits submitted previously in MPW7, by IITD researche… ☆16 · Jan 6, 2023 (updated 3 years ago)
- ☆12 · Jun 22, 2023 (updated 2 years ago)
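Most of the projects listed above accelerate some variant of the attention operation in hardware. For orientation, here is a minimal NumPy sketch of the multihead scaled dot-product attention computation these accelerators target; the function name and the random placeholder weights are purely illustrative and are not taken from any listed repository.

```python
import numpy as np

def multihead_attention(x, num_heads, rng):
    """Reference multihead scaled dot-product attention (inference only).

    x: (seq_len, d_model) input activations.
    Weights are random placeholders standing in for trained parameters.
    """
    seq_len, d_model = x.shape
    assert d_model % num_heads == 0, "d_model must divide evenly across heads"
    d_head = d_model // num_heads

    # Placeholder Q/K/V/output projection matrices.
    w_q, w_k, w_v, w_o = (
        rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
        for _ in range(4)
    )

    def split_heads(t):
        # (seq_len, d_model) -> (num_heads, seq_len, d_head)
        return t.reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)

    q, k, v = split_heads(x @ w_q), split_heads(x @ w_k), split_heads(x @ w_v)

    # Scaled dot-product scores per head: (num_heads, seq_len, seq_len)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)

    # Numerically stable softmax over the key dimension.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)

    # Weighted sum of values, then merge heads and project out.
    out = (weights @ v).transpose(1, 0, 2).reshape(seq_len, d_model)
    return out @ w_o
```

The quadratic `seq_len × seq_len` score matrix in the middle is exactly what sparse-attention designs such as SpAtten prune, and the chained matrix multiplies are what HLS and compute-in-memory implementations map onto hardware.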