DeepWok / mase
Machine-Learning Accelerator System Exploration Tools
☆166 · Updated last week

Alternatives and similar repositories for mase
Users interested in mase are comparing it to the repositories listed below.
- A survey on Hardware Accelerated LLMs ☆52 · Updated 4 months ago
- An open-source parameterizable NPU generator with a full-stack, multi-target compilation flow for intelligent workloads. ☆53 · Updated 2 months ago
- ☆91 · Updated last year
- CHARM: Composing Heterogeneous Accelerators on Heterogeneous SoC Architecture ☆143 · Updated this week
- HW Architecture-Mapping Design Space Exploration Framework for Deep Learning Accelerators ☆150 · Updated 2 months ago
- Multi-core HW accelerator mapping optimization framework for layer-fused ML workloads. ☆53 · Updated last month
- The code and artifacts associated with our MICRO'22 paper titled: "Adaptable Butterfly Accelerator for Attention-based NNs via Hardware …" ☆135 · Updated 2 years ago
- PyTorch model to RTL flow for low-latency inference ☆126 · Updated last year
- ONNXim is a fast cycle-level simulator that can model multi-core NPUs for DNN inference ☆118 · Updated 3 months ago
- High-Performance Sparse Linear Algebra on HBM-Equipped FPGAs Using HLS ☆91 · Updated 8 months ago
- SSR: Spatial Sequential Hybrid Architecture for Latency Throughput Tradeoff in Transformer Acceleration (Full Paper Accepted at FPGA'24) ☆32 · Updated this week
- An Open Workflow to Build Custom SoCs and run Deep Models at the Edge ☆79 · Updated 2 weeks ago
- An FPGA accelerator for general-purpose Sparse-Matrix Dense-Matrix Multiplication (SpMM). ☆79 · Updated 10 months ago
- A Reconfigurable Accelerator with Data Reordering Support for Low-Cost On-Chip Dataflow Switching ☆53 · Updated 2 months ago
- AutoSA: Polyhedral-Based Systolic Array Compiler ☆221 · Updated 2 years ago
- A reading list for SRAM-based Compute-In-Memory (CIM) research. ☆65 · Updated 3 months ago
- ☆57 · Updated last month
- FPGA-based Vision Transformer accelerator (Harvard CS205) ☆119 · Updated 3 months ago
- RTL implementation of Flex-DPE. ☆100 · Updated 5 years ago
- ☆53 · Updated 2 months ago
- NeuraLUT: Hiding Neural Network Density in Boolean Synthesizable Functions ☆32 · Updated 2 months ago
- ☆47 · Updated last month
- ☆41 · Updated 5 months ago
- Allo: A Programming Model for Composable Accelerator Design ☆235 · Updated last week
- [HPCA'21] SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning ☆93 · Updated 9 months ago
- FPGA-based hardware accelerator for Vision Transformer (ViT), with Hybrid-Grained Pipeline ☆56 · Updated 4 months ago
- [TCAD'23] AccelTran: A Sparsity-Aware Accelerator for Transformers ☆43 · Updated last year
- STONNE: A Simulation Tool for Neural Network Engines ☆132 · Updated last year
- A co-design architecture for sparse attention ☆52 · Updated 3 years ago
- Research and Materials on Hardware Implementation of the Transformer Model ☆264 · Updated 3 months ago