ebby-s / MX-for-FPGA
Implementation of Microscaling data formats in SystemVerilog.
☆29 · Updated Jul 6, 2025
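For context on the repository's topic: Microscaling (MX) formats store a small block of low-precision elements together with one shared power-of-two scale per block. The sketch below illustrates that idea in plain Python; the block size, element width, and function names are illustrative assumptions, not code from this repository.

```python
import numpy as np

def mx_quantize(block, elem_bits=8):
    """Quantize a block of floats to integers with one shared power-of-two
    scale (MX-style). Element width is a hypothetical parameter here."""
    max_int = 2 ** (elem_bits - 1) - 1          # e.g. 127 for 8-bit elements
    amax = np.max(np.abs(block))
    if amax == 0:
        return np.zeros(len(block), dtype=np.int32), 0
    # Smallest exponent e such that amax / 2**e fits the element range.
    e = int(np.ceil(np.log2(amax / max_int)))
    q = np.clip(np.round(block / 2.0 ** e), -max_int, max_int).astype(np.int32)
    return q, e                                  # elements + shared exponent

def mx_dequantize(q, e):
    # Reconstruction: every element in the block reuses the same scale.
    return q.astype(np.float64) * 2.0 ** e

vals = np.array([0.5, -3.2, 100.0, 0.01])
q, e = mx_quantize(vals)
approx = mx_dequantize(q, e)
```

Because the scale is a power of two shared across the block, dequantization in hardware reduces to a shift, which is what makes these formats attractive for FPGA datapaths.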
Alternatives and similar repositories for MX-for-FPGA
Users interested in MX-for-FPGA are comparing it to the repositories listed below.
- NeuraLUT-Assemble ☆47 · Updated Aug 20, 2025
- ☆22 · Updated Sep 27, 2022
- Official implementation of the EMNLP'23 paper "Revisiting Block-based Quantisation: What is Important for Sub-8-bit LLM Inference?" ☆24 · Updated Oct 25, 2023
- PolyLUT is the first quantized neural network training methodology that maps a neuron to a LUT while using multivariate polynomial functi… ☆55 · Updated Feb 9, 2024
- LLM Inference with Microscaling Format ☆34 · Updated Nov 12, 2024
- The official repository of Quamba1 [ICLR 2025] & Quamba2 [ICML 2025] ☆67 · Updated Jun 19, 2025
- Machine-Learning Accelerator System Exploration Tools ☆198 · Updated this week
- An API built to enable one-line-of-code access to accelerated open-source and custom AI models. ☆67 · Updated Mar 29, 2024
- A multicore microprocessor test harness for measuring interference ☆14 · Updated Apr 16, 2020
- ☆20 · Updated Dec 5, 2024
- ☆35 · Updated Dec 22, 2025
- ☆20 · Updated Feb 12, 2025
- ☆16 · Updated Apr 13, 2018
- ☆113 · Updated Nov 17, 2023
- ☆20 · Updated Mar 6, 2022
- PTQ4VM official repository ☆25 · Updated Apr 7, 2025
- Provides the code for the paper "EBPC: Extended Bit-Plane Compression for Deep Neural Network Inference and Training Accelerators" by Luk… ☆19 · Updated Oct 6, 2019
- ViTALiTy (HPCA'23) code repository ☆23 · Updated Mar 13, 2023
- FireQ: Fast INT4-FP8 Kernel and RoPE-aware Quantization for LLM Inference Acceleration ☆20 · Updated Jun 27, 2025
- Introductory examples for using PYNQ with Alveo ☆52 · Updated Mar 14, 2023
- Fast Emulation of Approximate DNN Accelerators in PyTorch ☆29 · Updated Feb 23, 2024
- PyTorch code for the paper "QA-LoRA: Quantization-Aware Low-Rank Adaptation of Large Language Models" ☆25 · Updated Sep 27, 2023
- GoldenEye is a functional simulator with fault injection capabilities for common and emerging numerical formats, implemented for the PyTo… ☆27 · Updated Oct 22, 2024
- ☆25 · Updated May 9, 2019
- ☆25 · Updated Dec 11, 2021
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models ☆35 · Updated Jun 12, 2024
- ☆65 · Updated May 6, 2020
- A tool to generate optimized hardware files for univariate functions. ☆15 · Updated Sep 5, 2024
- [ICML 2024] Sparse Model Inversion: Efficient Inversion of Vision Transformers with Less Hallucination ☆13 · Updated Apr 29, 2025
- DASS HLS Compiler ☆29 · Updated Oct 4, 2023
- Performance and resource models for fpgaConvNet: a streaming-architecture-based CNN accelerator. ☆32 · Updated Nov 7, 2024
- A systolic array simulator for multi-cycle MACs and varying-byte words, with the paper accepted to HPCA 2022. ☆84 · Updated Nov 7, 2021
- ☆10 · Updated Sep 7, 2023
- SAMO: Streaming Architecture Mapping Optimisation ☆34 · Updated Oct 4, 2023
- Live demo of hls4ml on embedded platforms such as the Pynq-Z2 ☆12 · Updated Aug 23, 2024
- ☆140 · Updated Jul 19, 2025
- A formally verified high-level synthesis tool based on CompCert and written in Coq. ☆97 · Updated Jan 29, 2026
- CHARM: Composing Heterogeneous Accelerators on Heterogeneous SoC Architecture ☆163 · Updated this week
- Flash attention optimization log ☆25 · Updated Jun 4, 2025