HLSTransform / submission
☆82 · Updated last year
Alternatives and similar repositories for submission:
Users interested in submission are comparing it to the repositories listed below.
- Machine-Learning Accelerator System Exploration Tools ☆145 · Updated this week
- PyTorch model-to-RTL flow for low-latency inference ☆125 · Updated 11 months ago
- Research and Materials on Hardware Implementation of Transformer Models ☆231 · Updated last week
- CHARM: Composing Heterogeneous Accelerators on Heterogeneous SoC Architecture ☆128 · Updated last month
- Allo: A Programming Model for Composable Accelerator Design ☆189 · Updated this week
- The code and artifacts associated with our MICRO'22 paper "Adaptable Butterfly Accelerator for Attention-based NNs via Hardware …" ☆121 · Updated last year
- An FPGA accelerator for general-purpose Sparse-Matrix Dense-Matrix Multiplication (SpMM) ☆77 · Updated 6 months ago
- HW Architecture-Mapping Design Space Exploration Framework for Deep Learning Accelerators ☆129 · Updated this week
- [HPCA'21] SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning ☆82 · Updated 5 months ago
- FPGA-based Vision Transformer accelerator (Harvard CS205) ☆103 · Updated last week
- A survey on hardware-accelerated LLMs ☆44 · Updated last month
- PolyLUT is the first quantized neural network training methodology that maps a neuron to a LUT while using multivariate polynomial functions ☆46 · Updated last year
- FPGA-based hardware accelerator for Vision Transformer (ViT) with a hybrid-grained pipeline ☆32 · Updated last month
- An Open Workflow to Build Custom SoCs and Run Deep Models at the Edge ☆72 · Updated last week
- An open-source parameterizable NPU generator with a full-stack, multi-target compilation stack for intelligent workloads ☆44 · Updated last month
- An LLM solution for RTL code generation, achieving state-of-the-art performance among non-commercial solutions and outperforming GPT-3.5 ☆157 · Updated last week
- [TCAD'23] AccelTran: A Sparsity-Aware Accelerator for Transformers ☆34 · Updated last year
- An HLS-based Winograd systolic CNN accelerator ☆50 · Updated 3 years ago
- Edge-MoE: Memory-Efficient Multi-Task Vision Transformer Architecture with Task-level Sparsity via Mixture-of-Experts ☆103 · Updated 9 months ago
- Hardware design of a universal NPU (CNN accelerator) for various convolutional neural networks ☆93 · Updated 3 weeks ago
- Fork of upstream onnxruntime focused on supporting RISC-V accelerators ☆83 · Updated last year
- SSR: Spatial Sequential Hybrid Architecture for Latency-Throughput Tradeoff in Transformer Acceleration (Full Paper Accepted in FPGA'24) ☆28 · Updated 6 months ago
- FREE TPU V3plus for FPGA is the free version of a commercial AI processor (EEP-TPU) for deep learning edge inference ☆123 · Updated last year
- AutoSA: Polyhedral-Based Systolic Array Compiler ☆210 · Updated 2 years ago
- ONNXim is a fast cycle-level simulator that can model multi-core NPUs for DNN inference ☆90 · Updated last week
- High-Performance Sparse Linear Algebra on HBM-Equipped FPGAs Using HLS ☆84 · Updated 4 months ago