HLSTransform / submission
☆115 · Updated last year
Alternatives and similar repositories for submission
Users interested in submission are comparing it to the libraries listed below.
- Machine-Learning Accelerator System Exploration Tools ☆183 · Updated this week
- An Open Workflow to Build Custom SoCs and run Deep Models at the Edge ☆100 · Updated this week
- Research and Materials on Hardware implementation of Transformer Model ☆292 · Updated 9 months ago
- CHARM: Composing Heterogeneous Accelerators on Heterogeneous SoC Architecture ☆163 · Updated this week
- HW Architecture-Mapping Design Space Exploration Framework for Deep Learning Accelerators ☆169 · Updated last month
- FPGA based Vision Transformer accelerator (Harvard CS205) ☆139 · Updated 10 months ago
- FPGA-based hardware accelerator for Vision Transformer (ViT), with Hybrid-Grained Pipeline. ☆108 · Updated 10 months ago
- ☆59 · Updated 7 months ago
- PyTorch model to RTL flow for low latency inference ☆131 · Updated last year
- Edge-MoE: Memory-Efficient Multi-Task Vision Transformer Architecture with Task-level Sparsity via Mixture-of-Experts ☆129 · Updated last year
- An FPGA Accelerator for Transformer Inference ☆92 · Updated 3 years ago
- Multi-core HW accelerator mapping optimization framework for layer-fused ML workloads. ☆64 · Updated 5 months ago
- A survey on Hardware Accelerated LLMs ☆61 · Updated 11 months ago
- An FPGA accelerator for general-purpose Sparse-Matrix Dense-Matrix Multiplication (SpMM) ☆92 · Updated last year
- [TCAD'23] AccelTran: A Sparsity-Aware Accelerator for Transformers ☆54 · Updated 2 years ago
- A Reconfigurable Accelerator with Data Reordering Support for Low-Cost On-Chip Dataflow Switching ☆71 · Updated last month
- High-Performance Sparse Linear Algebra on HBM-Equipped FPGAs Using HLS ☆95 · Updated last year
- SSR: Spatial Sequential Hybrid Architecture for Latency Throughput Tradeoff in Transformer Acceleration (Full Paper Accepted at FPGA'24) ☆35 · Updated this week
- An open-source parameterizable NPU generator with a full-stack, multi-target compilation stack for intelligent workloads. ☆69 · Updated 2 months ago
- FREE TPU V3plus for FPGA is the free version of a commercial AI processor (EEP-TPU) for Deep Learning Edge Inference ☆168 · Updated 2 years ago
- An HLS-based Winograd systolic CNN accelerator ☆54 · Updated 4 years ago
- NeuraLUT-Assemble ☆46 · Updated 3 months ago
- Train and deploy LUT-based neural networks on FPGAs ☆102 · Updated last year
- Deep Learning Accelerator Based on Eyeriss V2 Architecture with custom RISC-V extended instructions ☆204 · Updated 5 years ago
- [HPCA'21] SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning ☆116 · Updated last year
- AutoSA: Polyhedral-Based Systolic Array Compiler ☆230 · Updated 3 years ago
- Library of approximate arithmetic circuits ☆61 · Updated 3 years ago
- Implementation of Microscaling data formats in SystemVerilog ☆28 · Updated 5 months ago
- ONNXim is a fast cycle-level simulator that can model multi-core NPUs for DNN inference ☆171 · Updated 2 weeks ago
- ☆75 · Updated last week