DeepWok / mase
Machine-Learning Accelerator System Exploration Tools
☆197 · Updated 2 weeks ago
Alternatives and similar repositories for mase
Users interested in mase are comparing it to the libraries listed below.
- A survey on Hardware Accelerated LLMs ☆61 · Updated last year
- Multi-core HW accelerator mapping optimization framework for layer-fused ML workloads. ☆64 · Updated 7 months ago
- ☆119 · Updated 2 years ago
- NeuraLUT-Assemble ☆47 · Updated 5 months ago
- PyTorch model to RTL flow for low-latency inference ☆131 · Updated last year
- An open-source, parameterizable NPU generator with a full-stack, multi-target compilation flow for intelligent workloads. ☆72 · Updated 4 months ago
- ☆84 · Updated last month
- CHARM: Composing Heterogeneous Accelerators on Heterogeneous SoC Architecture ☆162 · Updated this week
- HW Architecture-Mapping Design Space Exploration Framework for Deep Learning Accelerators ☆181 · Updated 2 weeks ago
- An MLIR compiler for lowering PyTorch/C/C++ code into HLS dataflow designs ☆58 · Updated 6 months ago
- Allo Accelerator Design and Programming Framework (PLDI'24) ☆343 · Updated this week
- ☆62 · Updated 10 months ago
- Implementation of Microscaling data formats in SystemVerilog (see the MX quantization sketch after this list). ☆29 · Updated 7 months ago
- DNN Compiler for Heterogeneous SoCs ☆60 · Updated this week
- AutoSA: Polyhedral-Based Systolic Array Compiler ☆236 · Updated 3 years ago
- A DSL for Systolic Arrays (see the systolic matmul sketch after this list) ☆83 · Updated 7 years ago
- High-Performance Sparse Linear Algebra on HBM-Equipped FPGAs Using HLS ☆95 · Updated last year
- An FPGA accelerator for general-purpose Sparse-Matrix Dense-Matrix Multiplication (SpMM). ☆92 · Updated last year
- FSA: Fusing FlashAttention within a Single Systolic Array ☆86 · Updated 5 months ago
- Accelergy is an energy estimation infrastructure for accelerator designs ☆155 · Updated 8 months ago
- ONNXim is a fast cycle-level simulator that can model multi-core NPUs for DNN inference ☆184 · Updated last month
- A scalable High-Level Synthesis framework on MLIR ☆288 · Updated last year
- RTL implementation of Flex-DPE. ☆115 · Updated 5 years ago
- TAPA compiles task-parallel HLS programs into high-performance FPGA accelerators. UCLA-maintained. ☆180 · Updated 5 months ago
- A Linux Docker image for the DNN accelerator exploration infrastructure composed of Accelergy and Timeloop ☆62 · Updated 3 months ago
- An MLIR dialect to enable the efficient acceleration of ML models on CGRAs. ☆65 · Updated last year
- CGRA-Flow is an integrated framework for CGRA compilation, exploration, synthesis, and development. ☆152 · Updated last week
- PolyLUT is the first quantized neural network training methodology that maps a neuron to a LUT while using multivariate polynomial functions (see the neuron-to-LUT sketch after this list) ☆55 · Updated 2 years ago
- Train and deploy LUT-based neural networks on FPGAs ☆106 · Updated last year
- ☆123 · Updated this week
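
Several entries above revolve around reduced-precision arithmetic; the Microscaling (MX) repository is a good example of the underlying idea. Below is a minimal Python sketch of an MXINT8-style block quantizer, assuming the OCP MX convention that each block of k elements shares one power-of-two scale (an 8-bit exponent) while the elements are stored as int8 mantissas. This is an illustration only; the SystemVerilog repository's exact formats and rounding rules may differ.

```python
import numpy as np

def mx_quantize(x: np.ndarray, block: int = 32):
    """Split x into blocks; each block gets one shared power-of-two scale
    (stored as an 8-bit exponent) plus int8 mantissas."""
    pad = (-len(x)) % block
    xb = np.pad(x, (0, pad)).reshape(-1, block)
    max_abs = np.abs(xb).max(axis=1, keepdims=True)
    # Smallest power-of-two scale that keeps every mantissa within int8 range.
    exp = np.ceil(np.log2(np.maximum(max_abs, 1e-30) / 127.0)).astype(np.int8)
    mant = np.clip(np.round(xb / 2.0 ** exp), -128, 127).astype(np.int8)
    return exp, mant

def mx_dequantize(exp: np.ndarray, mant: np.ndarray) -> np.ndarray:
    return (mant.astype(np.float32) * 2.0 ** exp.astype(np.float32)).ravel()

vals = np.random.randn(64).astype(np.float32)
e, m = mx_quantize(vals)
print("max reconstruction error:", np.abs(mx_dequantize(e, m)[:64] - vals).max())
```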
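Systolic arrays recur throughout this list (AutoSA, the systolic-array DSL, FSA). For intuition, here is a toy cycle-level model of an output-stationary systolic matrix multiply, not taken from any repository above: A-values stream rightward, B-values stream downward, and each PE performs one multiply-accumulate per cycle.

```python
import numpy as np

def systolic_matmul(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Toy cycle-level model of an M x N output-stationary systolic array:
    row i of A enters from the left with an i-cycle skew, column j of B
    enters from the top with a j-cycle skew, and PE (i, j) accumulates the
    product of whatever passes through it each cycle."""
    M, K = A.shape
    K2, N = B.shape
    assert K == K2, "inner dimensions must match"
    acc = np.zeros((M, N))
    a_reg = np.zeros((M, N))  # A-value currently held by each PE
    b_reg = np.zeros((M, N))  # B-value currently held by each PE
    for t in range(M + N + K - 2):  # enough cycles to drain the pipeline
        new_a, new_b = np.zeros((M, N)), np.zeros((M, N))
        for i in range(M):  # inject skewed A-row elements at column 0
            if 0 <= t - i < K:
                new_a[i, 0] = A[i, t - i]
        new_a[:, 1:] = a_reg[:, :-1]  # A-values shift one PE to the right
        for j in range(N):  # inject skewed B-column elements at row 0
            if 0 <= t - j < K:
                new_b[0, j] = B[t - j, j]
        new_b[1:, :] = b_reg[:-1, :]  # B-values shift one PE downward
        a_reg, b_reg = new_a, new_b
        acc += a_reg * b_reg  # one MAC per PE per cycle
    return acc

A, B = np.random.randn(3, 4), np.random.randn(4, 5)
print(np.allclose(systolic_matmul(A, B), A @ B))  # True
```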
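PolyLUT, NeuraLUT-Assemble, and the LUT-based training repositories share one trick: after training a quantized, sparsely connected network, each neuron's truth table is enumerated and baked into FPGA LUTs. The sketch below shows only that enumeration step for a single hypothetical ReLU neuron; the training procedures and PolyLUT's polynomial feature maps are omitted.

```python
import itertools
import numpy as np

def neuron_to_lut(weights, bias, in_bits=2, out_bits=2):
    """Enumerate every quantized input pattern of one neuron and precompute
    relu(w . x + b), quantized to out_bits -- turning the neuron into a
    2**(fan_in * in_bits)-entry lookup table."""
    levels = 2 ** in_bits
    lut = {}
    for x in itertools.product(range(levels), repeat=len(weights)):
        y = max(0.0, float(np.dot(weights, x)) + bias)  # ReLU activation
        lut[x] = min(int(y), 2 ** out_bits - 1)         # clamp to out_bits
    return lut

# A hypothetical 3-input neuron: 4**3 = 64 LUT entries.
lut = neuron_to_lut(weights=[1.0, -0.5, 0.25], bias=-1.0)
print(len(lut), "entries; f(3, 0, 2) ->", lut[(3, 0, 2)])
```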