I-Doctor / RTL_library_of_basic_hardware_units
Implementations of basic hardware units in RTL (Verilog for now), intended for area/power evaluation and for supporting hardware design trade-off studies.
☆10 · Updated last year
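To illustrate the kind of unit such a library typically collects, here is a minimal sketch of a parameterized registered multiplier in Verilog. The module name, ports, and parameters are illustrative assumptions, not code taken from this repository:

```verilog
// Minimal sketch of a "basic hardware unit" of the kind such a library
// collects: a parameterized registered multiplier. Module and port names
// are illustrative assumptions, not the repository's actual code.
module mul_reg #(
    parameter WIDTH = 8
) (
    input  wire                 clk,
    input  wire                 rst_n,
    input  wire [WIDTH-1:0]     a,
    input  wire [WIDTH-1:0]     b,
    output reg  [2*WIDTH-1:0]   p
);
    // Registering the product lets the unit be synthesized and timed
    // stand-alone, giving per-unit area/power at a given clock period.
    always @(posedge clk or negedge rst_n) begin
        if (!rst_n)
            p <= {2*WIDTH{1'b0}};
        else
            p <= a * b;
    end
endmodule
```

Synthesizing a unit like this at several widths and clock targets yields the per-unit area/power numbers that feed the design trade-off mentioned above.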
Alternatives and similar repositories for RTL_library_of_basic_hardware_units: users interested in this repository are comparing it to the projects listed below.
- MICRO22 artifact evaluation for Sparseloop (☆41, updated 2 years ago)
- Multi-core HW accelerator mapping optimization framework for layer-fused ML workloads (☆46, updated this week)
- A co-design architecture for sparse attention (☆48, updated 3 years ago)
- Linux Docker setup for the DNN accelerator exploration infrastructure composed of Accelergy and Timeloop (☆47, updated this week)
- A systolic array simulator for multi-cycle MACs and varying-byte words, with the paper accepted to HPCA 2022 (☆67, updated 3 years ago)
- Open-source release of the MSD framework (☆16, updated last year)
- A general framework for optimizing DNN dataflow on systolic arrays (☆33, updated 4 years ago)
- A Reconfigurable Accelerator with Data Reordering Support for Low-Cost On-Chip Dataflow Switching (☆43, updated 3 months ago)
- A framework for fast exploration of the depth-first scheduling space for DNN accelerators (☆35, updated last year)
- A comprehensive tool for system-level performance estimation of chiplet-based in-memory computing (IMC) architectures (☆17, updated 6 months ago)
- A bit-level sparsity-aware multiply-accumulate processing element (☆12, updated 6 months ago)
- A Unified Framework for Training, Mapping and Simulation of ReRAM-Based Convolutional Neural Network Acceleration (☆33, updated 2 years ago)
- HW accelerator mapping optimization framework for in-memory computing (☆21, updated this week)
- An efficient spatial accelerator enabling hybrid sparse attention mechanisms for long sequences (☆24, updated 10 months ago)
- Sparse CNN Accelerator targeting Intel FPGA (☆11, updated 3 years ago)
- Implementation of Input Stationary, Weight Stationary and Output Stationary dataflows for a given neural network on a tiled architecture (☆9, updated 4 years ago); a minimal weight-stationary PE is sketched in Verilog after this list
- Benchmark framework for compute-in-memory-based deep neural network accelerators, focused on inference engines (☆21, updated 3 years ago)
- [TCAD'23] AccelTran: A Sparsity-Aware Accelerator for Transformers (☆33, updated last year)
- Code for the paper "FuSeConv: Fully Separable Convolutions for Fast Inference on Systolic Arrays", published at DATE 2021 (☆14, updated 3 years ago)
- MNSIM_Python_v1.0; the former circuit-level version: https://github.com/Zhu-Zhenhua/MNSIM_V1.1 (☆34, updated last year)
- tpu-systolic-array-weight-stationary (☆20, updated 3 years ago)
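For the dataflow entry above (input/weight/output stationary), a weight-stationary processing element reduces to a held weight register plus a MAC with systolic pass-through. A minimal Verilog sketch follows; all names and the load/compute interface are assumptions made for illustration, not the listed project's code:

```verilog
// Minimal weight-stationary PE sketch: the weight is loaded once and held,
// activations stream through horizontally, partial sums vertically.
// All names and the load/compute interface are illustrative assumptions.
module ws_pe #(
    parameter DW = 8,   // data width of weights/activations
    parameter AW = 32   // accumulator/partial-sum width
) (
    input  wire              clk,
    input  wire              rst_n,
    input  wire              load_w,    // 1: capture weight, 0: compute
    input  wire [DW-1:0]     w_in,      // weight to hold stationary
    input  wire [DW-1:0]     act_in,    // activation from the left PE
    input  wire [AW-1:0]     psum_in,   // partial sum from the PE above
    output reg  [DW-1:0]     act_out,   // activation forwarded right
    output reg  [AW-1:0]     psum_out   // updated partial sum, sent down
);
    reg [DW-1:0] weight;                // the "stationary" operand

    always @(posedge clk or negedge rst_n) begin
        if (!rst_n) begin
            weight   <= {DW{1'b0}};
            act_out  <= {DW{1'b0}};
            psum_out <= {AW{1'b0}};
        end else if (load_w) begin
            weight   <= w_in;           // weight stays put after this
        end else begin
            act_out  <= act_in;                      // systolic pass-through
            psum_out <= psum_in + act_in * weight;   // MAC
        end
    end
endmodule
```

In a weight-stationary systolic array, one such PE sits at each grid position: weights are loaded once per tile, then activations stream left to right while partial sums accumulate top to bottom.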