ebby-s / MX-for-FPGA
Implementation of Microscaling data formats in SystemVerilog.
☆24 · Updated 3 months ago
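For context, "Microscaling" (MX) formats quantize tensors in small blocks that share a single power-of-two scale, with each element stored in a narrow type such as FP8, FP6, FP4, or INT8. The sketch below is a minimal, illustrative Python model of that block-scaling idea, not the repository's SystemVerilog implementation; the 32-element block size, INT8 elements, and round-to-nearest behavior are assumptions chosen for clarity.

```python
# Minimal, illustrative model of MX-style block scaling (not the repo's RTL).
# Assumptions: 32-element blocks, a shared power-of-two exponent per block,
# and signed 8-bit elements with round-to-nearest; real MX variants differ.
import numpy as np

BLOCK = 32                         # elements per block (assumed)
ELEM_BITS = 8                      # element width in bits (assumed)
QMAX = 2 ** (ELEM_BITS - 1) - 1    # 127 for int8

def mx_quantize(x: np.ndarray):
    """Quantize a 1-D float array into (int8 elements, per-block exponents, pad)."""
    x = x.astype(np.float32)
    pad = (-len(x)) % BLOCK
    blocks = np.pad(x, (0, pad)).reshape(-1, BLOCK)
    # Shared scale per block: smallest power of two that fits the max magnitude.
    max_abs = np.maximum(np.abs(blocks).max(axis=1), 1e-30)
    exp = np.ceil(np.log2(max_abs / QMAX)).astype(np.int32)
    scale = np.exp2(exp.astype(np.float32))[:, None]
    elems = np.clip(np.rint(blocks / scale), -QMAX, QMAX).astype(np.int8)
    return elems, exp, pad

def mx_dequantize(elems: np.ndarray, exp: np.ndarray, pad: int) -> np.ndarray:
    """Reconstruct float values from int8 elements and shared block exponents."""
    x = (elems.astype(np.float32) * np.exp2(exp.astype(np.float32))[:, None]).reshape(-1)
    return x[:len(x) - pad] if pad else x

if __name__ == "__main__":
    data = np.random.randn(100).astype(np.float32)
    q, e, p = mx_quantize(data)
    print("max abs error:", np.abs(mx_dequantize(q, e, p) - data).max())
```

The appeal for FPGA datapaths is that a whole block of narrow multiplies can share one scaling step, which is the kind of structure an RTL implementation like this repository targets.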
Alternatives and similar repositories for MX-for-FPGA
Users interested in MX-for-FPGA are comparing it to the repositories listed below.
- Multi-core HW accelerator mapping optimization framework for layer-fused ML workloads. ☆59 · Updated 3 months ago
- [TCAD'23] AccelTran: A Sparsity-Aware Accelerator for Transformers ☆51 · Updated last year
- A Reconfigurable Accelerator with Data Reordering Support for Low-Cost On-Chip Dataflow Switching ☆64 · Updated 2 weeks ago
- ☆31 · Updated last week
- ☆51 · Updated 2 months ago
- Open-source release of the MSD framework ☆16 · Updated 2 years ago
- A framework for fast exploration of the depth-first scheduling space for DNN accelerators ☆40 · Updated 2 years ago
- RTL implementation of Flex-DPE. ☆112 · Updated 5 years ago
- An open-source, parameterizable NPU generator with a full-stack, multi-target compilation flow for intelligent workloads. ☆66 · Updated last week
- A co-design architecture for sparse attention ☆52 · Updated 4 years ago
- An FPGA accelerator for general-purpose Sparse-Matrix Dense-Matrix Multiplication (SpMM). ☆84 · Updated last year
- ☆18 · Updated 2 years ago
- Official implementation of EMNLP'23 paper "Revisiting Block-based Quantisation: What is Important for Sub-8-bit LLM Inference?" ☆23 · Updated last year
- ☆48 · Updated 4 years ago
- A systolic array simulator for multi-cycle MACs and varying-byte words, with the paper accepted to HPCA 2022. ☆80 · Updated 3 years ago
- ☆72 · Updated 2 years ago
- A bit-level sparsity-aware multiply-accumulate processing element. ☆16 · Updated last year
- FPGA-based hardware accelerator for Vision Transformer (ViT), with Hybrid-Grained Pipeline. ☆95 · Updated 8 months ago
- ☆30 · Updated 6 months ago
- Model LLM inference on single-core dataflow accelerators ☆14 · Updated last month
- HW Architecture-Mapping Design Space Exploration Framework for Deep Learning Accelerators ☆160 · Updated last month
- FlexASR: A Reconfigurable Hardware Accelerator for Attention-based Seq-to-Seq Networks ☆48 · Updated 7 months ago
- ☆17 · Updated 4 months ago
- PolyLUT is the first quantized neural network training methodology that maps a neuron to a LUT while using multivariate polynomial functions… ☆52 · Updated last year
- A collection of tutorials for the fpgaConvNet framework. ☆45 · Updated last year
- ☆45 · Updated 2 years ago
- SSR: Spatial Sequential Hybrid Architecture for Latency Throughput Tradeoff in Transformer Acceleration (Full Paper Accepted in FPGA'24) ☆33 · Updated this week
- [TRETS 2025][FPGA 2024] FPGA Accelerator for Imbalanced SpMV using HLS ☆14 · Updated last month
- ☆35 · Updated 5 years ago
- Fast Emulation of Approximate DNN Accelerators in PyTorch ☆26 · Updated last year