DeepWok / mase
Machine-Learning Accelerator System Exploration Tools
☆168 · Updated 3 weeks ago
Alternatives and similar repositories for mase
Users interested in mase are comparing it to the libraries listed below.
- A survey on hardware-accelerated LLMs ☆55 · Updated 5 months ago
- An open-source, parameterizable NPU generator with a full-stack, multi-target compilation flow for intelligent workloads. ☆57 · Updated 3 months ago
- ☆94 · Updated last year
- CHARM: Composing Heterogeneous Accelerators on Heterogeneous SoC Architecture ☆143 · Updated this week
- Multi-core HW accelerator mapping optimization framework for layer-fused ML workloads. ☆54 · Updated this week
- ONNXim is a fast cycle-level simulator that can model multi-core NPUs for DNN inference ☆126 · Updated 4 months ago
- HW architecture-mapping design space exploration framework for deep learning accelerators ☆151 · Updated last week
- An FPGA accelerator for general-purpose Sparse-Matrix Dense-Matrix Multiplication (SpMM). ☆79 · Updated 11 months ago
- An open workflow to build custom SoCs and run deep models at the edge ☆81 · Updated last month
- High-performance sparse linear algebra on HBM-equipped FPGAs using HLS ☆92 · Updated 8 months ago
- ☆59 · Updated 2 weeks ago
- Repository to host and maintain scale-sim-v2 code ☆308 · Updated 2 months ago
- NeuraLUT: Hiding Neural Network Density in Boolean Synthesizable Functions ☆35 · Updated 2 months ago
- ☆55 · Updated 3 months ago
- RTL implementation of Flex-DPE. ☆103 · Updated 5 years ago
- A reconfigurable accelerator with data reordering support for low-cost on-chip dataflow switching ☆54 · Updated 3 months ago
- An FPGA accelerator for Transformer inference ☆83 · Updated 3 years ago
- A reading list for SRAM-based Compute-In-Memory (CIM) research. ☆68 · Updated 2 weeks ago
- FPGA-based hardware accelerator for Vision Transformer (ViT), with a hybrid-grained pipeline. ☆63 · Updated 5 months ago
- PolyLUT is the first quantized neural network training methodology that maps a neuron to a LUT while using multivariate polynomial functi… ☆53 · Updated last year
- RapidStream TAPA compiles task-parallel HLS programs into high-frequency FPGA accelerators. ☆172 · Updated this week
- SSR: Spatial Sequential Hybrid Architecture for Latency Throughput Tradeoff in Transformer Acceleration (full paper accepted at FPGA'24) ☆32 · Updated this week
- A dataflow architecture for universal graph neural network inference via multi-queue streaming. ☆73 · Updated 2 years ago
- Linux Docker image for the DNN accelerator exploration infrastructure composed of Accelergy and Timeloop ☆53 · Updated 2 months ago
- AutoSA: Polyhedral-Based Systolic Array Compiler ☆221 · Updated 2 years ago
- PyTorch model to RTL flow for low-latency inference ☆127 · Updated last year
- Implementation of Microscaling data formats in SystemVerilog. ☆20 · Updated 10 months ago
- Topics in machine learning accelerator design ☆76 · Updated 2 years ago
- FPGA-based Vision Transformer accelerator (Harvard CS205) ☆124 · Updated 4 months ago
- A DSL for systolic arrays ☆79 · Updated 6 years ago