Inference-and-Optimization / High-Level-Synthesis-Study-Notes
Vivado HLS study notes, courses, documents.
☆12 · Updated 5 years ago
Alternatives and similar repositories for High-Level-Synthesis-Study-Notes:
Users interested in High-Level-Synthesis-Study-Notes are comparing it to the repositories listed below.
- Accelerate a multi-head attention Transformer model using HLS for FPGAs ☆11 · Updated last year
- An open-source Verilog-based LeNet-1 parallel CNN accelerator for FPGAs in Vivado 2017 ☆15 · Updated 5 years ago
- ☆18 · Updated 2 years ago
- Open-source release of the MSD framework ☆16 · Updated last year
- ☆35 · Updated 3 weeks ago
- ☆13 · Updated last year
- ☆26 · Updated 3 weeks ago
- Public repository for the DAC 2021 paper "Scaling up HBM Efficiency of Top-K SpMV for Approximate Embedding Similarity on FPGAs" ☆14 · Updated 3 years ago
- A systolic array simulator for multi-cycle MACs and varying-byte words, with the paper accepted to HPCA 2022 ☆74 · Updated 3 years ago
- ☆12 · Updated last year
- A framework for fast exploration of the depth-first scheduling space for DNN accelerators ☆38 · Updated 2 years ago
- [TCAD'23] AccelTran: A Sparsity-Aware Accelerator for Transformers ☆40 · Updated last year
- eyeriss-chisel3 ☆40 · Updated 2 years ago
- [ASAP 2020; FPGA 2020] Hardware architecture to accelerate GNNs (common IP modules for minibatch training and full-batch inference) ☆41 · Updated 4 years ago
- A spatial accelerator generation framework for tensor algebra ☆56 · Updated 3 years ago
- A co-design architecture for sparse attention ☆52 · Updated 3 years ago
- An HLS-based Winograd systolic CNN accelerator ☆50 · Updated 3 years ago
- ☆26 · Updated 5 months ago
- Multi-core HW accelerator mapping optimization framework for layer-fused ML workloads ☆51 · Updated 2 months ago
- Documentation on porting models to Gemmini ☆24 · Updated 2 years ago
- ☆10 · Updated 2 years ago
- SSR: Spatial Sequential Hybrid Architecture for Latency Throughput Tradeoff in Transformer Acceleration (full paper accepted at FPGA'24) ☆31 · Updated this week
- A collection of tutorials for the fpgaConvNet framework ☆39 · Updated 7 months ago
- [TECS'23] A project on the co-design of accelerators and CNNs ☆20 · Updated 2 years ago
- ☆15 · Updated 10 months ago
- High-Performance Sparse Linear Algebra on HBM-Equipped FPGAs Using HLS ☆90 · Updated 6 months ago
- A Unified Framework for Training, Mapping and Simulation of ReRAM-Based Convolutional Neural Network Acceleration ☆34 · Updated 2 years ago
- An efficient spatial accelerator enabling hybrid sparse attention mechanisms for long sequences ☆26 · Updated last year
- Designs from finalist teams of the DAC System Design Contest ☆37 · Updated 4 years ago
- ☆33 · Updated 6 years ago