BUAA-CI-LAB / Literatures-on-SRAM-based-CIM
A reading list for SRAM-based Compute-In-Memory (CIM) research.
☆60 · Updated 3 months ago
Alternatives and similar repositories for Literatures-on-SRAM-based-CIM:
Users interested in Literatures-on-SRAM-based-CIM are comparing it to the repositories listed below.
- A Reconfigurable Accelerator with Data Reordering Support for Low-Cost On-Chip Dataflow Switching ☆49 · Updated last month
- A collection of research papers on SRAM-based compute-in-memory architectures. ☆28 · Updated last year
- CHARM: Composing Heterogeneous Accelerators on Heterogeneous SoC Architecture ☆140 · Updated this week
- RTL implementation of Flex-DPE. ☆99 · Updated 5 years ago
- A Flexible and Energy-Efficient Accelerator for Sparse Convolutional Neural Networks ☆66 · Updated 2 months ago
- ☆108 · Updated 4 years ago
- Multi-core HW accelerator mapping optimization framework for layer-fused ML workloads. ☆51 · Updated last week
- The codes and artifacts associated with our MICRO'22 paper titled: "Adaptable Butterfly Accelerator for Attention-based NNs via Hardware …" ☆129 · Updated last year
- Benchmark framework of compute-in-memory based accelerators for deep neural network (inference engine focused) ☆67 · Updated last year
- An FPGA accelerator for general-purpose Sparse-Matrix Dense-Matrix Multiplication (SpMM). ☆79 · Updated 9 months ago
- An open-source, parameterizable NPU generator with a full-stack, multi-target compilation flow for intelligent workloads. ☆51 · Updated last month
- An Open-Source Tool for CGRA Accelerators ☆65 · Updated 3 weeks ago
- A SystemVerilog implementation of Row-Stationary dataflow and Hierarchical Mesh Network-on-Chip Architecture based on Eyeriss CNN Acceler… ☆157 · Updated 5 years ago
- Benchmark framework of compute-in-memory based accelerators for deep neural network (inference engine focused) ☆65 · Updated 2 months ago
- ☆50 · Updated last year
- [ASAP 2020; FPGA 2020] Hardware architecture to accelerate GNNs (common IP modules for minibatch training and full batch inference) ☆41 · Updated 4 years ago
- A co-design architecture for sparse attention ☆52 · Updated 3 years ago
- ☆65 · Updated 2 months ago
- A systolic array simulator for multi-cycle MACs and varying-byte words, with the paper accepted to HPCA 2022. ☆77 · Updated 3 years ago
- A dataflow architecture for universal graph neural network inference via multi-queue streaming. ☆72 · Updated 2 years ago
- SSR: Spatial Sequential Hybrid Architecture for Latency Throughput Tradeoff in Transformer Acceleration (Full Paper Accepted in FPGA'24) ☆31 · Updated this week
- A comprehensive tool that allows for system-level performance estimation of chiplet-based In-Memory Computing (IMC) architectures. ☆21 · Updated 10 months ago
- A framework for fast exploration of the depth-first scheduling space for DNN accelerators ☆38 · Updated 2 years ago
- An integrated CGRA design framework ☆88 · Updated last month
- RapidStream TAPA compiles task-parallel HLS programs into high-frequency FPGA accelerators. ☆168 · Updated this week
- MICRO'22 artifact evaluation for Sparseloop ☆43 · Updated 2 years ago
- An FPGA Accelerator for Transformer Inference ☆81 · Updated 3 years ago
- The framework for the paper "Inter-layer Scheduling Space Definition and Exploration for Tiled Accelerators" in ISCA 2023. ☆67 · Updated last month
- Open-source framework for the HPCA 2024 paper "Gemini: Mapping and Architecture Co-exploration for Large-scale DNN Chiplet Accelerators" ☆80 · Updated last week
- High-Performance Sparse Linear Algebra on HBM-Equipped FPGAs Using HLS ☆90 · Updated 7 months ago