ebby-s / MX-for-FPGA
Implementation of Microscaling (MX) data formats in SystemVerilog.
☆23 · Updated last month
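For context on what the repo implements: below is a minimal, illustrative SystemVerilog sketch of an MX block container, assuming the MXFP8 (E4M3) element type with the OCP MX default block size of k = 32 and a shared E8M0 scale. The package, struct, and helper names here are hypothetical and are not taken from MX-for-FPGA's actual code.

```systemverilog
package mx_types_pkg;

  // OCP MX spec default: k = 32 elements share one 8-bit E8M0 scale.
  localparam int unsigned MX_BLOCK_SIZE = 32;

  // One MX block: a shared E8M0 scale plus k private FP8 (E4M3) elements.
  typedef struct packed {
    logic [7:0]                    scale_e8m0;      // shared scale = 2^(scale - 127)
    logic [MX_BLOCK_SIZE-1:0][7:0] elems_fp8_e4m3;  // k scalar elements
  } mx_fp8_block_t;

  // Simulation-only helper: decode element idx to a real value, applying the
  // shared scale. Special encodings (E4M3 NaN, E8M0 NaN) are ignored here.
  function automatic real mx_decode_elem(mx_fp8_block_t blk, int unsigned idx);
    logic [7:0] e;
    real sign, val;
    e    = blk.elems_fp8_e4m3[idx];
    sign = e[7] ? -1.0 : 1.0;
    if (e[6:3] == 4'd0)                                         // subnormal: exp = 1 - bias
      val = sign * (real'(e[2:0]) / 8.0) * (2.0 ** (-6));
    else                                                        // normal: E4M3 bias = 7
      val = sign * (1.0 + real'(e[2:0]) / 8.0) * (2.0 ** (int'(e[6:3]) - 7));
    return val * (2.0 ** (int'(blk.scale_e8m0) - 127));         // E8M0 bias = 127
  endfunction

endpackage
```

In synthesizable hardware the decode would be done with bit-level fixed-point logic rather than `real`; the `real` helper is only meant to make the block layout and the shared-scale arithmetic concrete in simulation.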
Alternatives and similar repositories for MX-for-FPGA
Users interested in MX-for-FPGA are comparing it to the repositories listed below.
- Multi-core HW accelerator mapping optimization framework for layer-fused ML workloads. ☆57 · Updated last month
- ☆47 · Updated 3 weeks ago
- An open-source parameterizable NPU generator with full-stack multi-target compilation stack for intelligent workloads. ☆60 · Updated 4 months ago
- [TCAD'23] AccelTran: A Sparsity-Aware Accelerator for Transformers ☆51 · Updated last year
- A framework for fast exploration of the depth-first scheduling space for DNN accelerators ☆39 · Updated 2 years ago
- A Reconfigurable Accelerator with Data Reordering Support for Low-Cost On-Chip Dataflow Switching ☆56 · Updated 4 months ago
- ☆49 · Updated 3 years ago
- Open-source release of the MSD framework ☆16 · Updated last year
- A co-design architecture on sparse attention ☆51 · Updated 3 years ago
- MICRO22 artifact evaluation for Sparseloop ☆44 · Updated 3 years ago
- HW Architecture-Mapping Design Space Exploration Framework for Deep Learning Accelerators ☆155 · Updated 2 weeks ago
- ☆28 · Updated 4 months ago
- An FPGA accelerator for general-purpose Sparse-Matrix Dense-Matrix Multiplication (SpMM). ☆81 · Updated last year
- RTL implementation of Flex-DPE. ☆108 · Updated 5 years ago
- ☆18 · Updated 2 years ago
- ☆29 · Updated this week
- A bit-level sparsity-aware multiply-accumulate processing element. ☆16 · Updated last year
- An efficient spatial accelerator enabling hybrid sparse attention mechanisms for long sequences ☆29 · Updated last year
- SSR: Spatial Sequential Hybrid Architecture for Latency Throughput Tradeoff in Transformer Acceleration (Full Paper Accepted in FPGA'24) ☆32 · Updated this week
- ☆35 · Updated 5 years ago
- ☆17 · Updated 2 months ago
- PolyLUT is the first quantized neural network training methodology that maps a neuron to a LUT while using multivariate polynomial functi… ☆54 · Updated last year
- A systolic array simulator for multi-cycle MACs and varying-byte words, with the paper accepted to HPCA 2022. ☆80 · Updated 3 years ago
- ☆72 · Updated 2 years ago
- Linux Docker image for the DNN accelerator exploration infrastructure composed of Accelergy and Timeloop ☆56 · Updated 3 months ago
- [HPCA'21] SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning ☆100 · Updated 11 months ago
- Accelergy: an energy estimation infrastructure for accelerator designs ☆147 · Updated 2 months ago
- FlexASR: A Reconfigurable Hardware Accelerator for Attention-based Seq-to-Seq Networks ☆46 · Updated 5 months ago
- FracBNN: Accurate and FPGA-Efficient Binary Neural Networks with Fractional Activations ☆94 · Updated 3 years ago
- Model LLM inference on single-core dataflow accelerators ☆12 · Updated 5 months ago