fffasttime / MRFI
☆12 · Updated 5 months ago
Alternatives and similar repositories for MRFI
Users interested in MRFI are comparing it to the libraries listed below.
- A collection of research papers on SRAM-based compute-in-memory architectures. ☆29 · Updated last year
- ☆18 · Updated last year
- ☆71 · Updated 7 months ago
- Fast Emulation of Approximate DNN Accelerators in PyTorch ☆26 · Updated last year
- Collection of kernel accelerators optimised for LLM execution ☆23 · Updated 5 months ago
- ☆29 · Updated 5 months ago
- An HLS-based Winograd systolic CNN accelerator ☆54 · Updated 4 years ago
- A Unified Framework for Training, Mapping and Simulation of ReRAM-Based Convolutional Neural Network Acceleration ☆35 · Updated 3 years ago
- ☆11 · Updated last year
- Accelerate a multi-head attention transformer model using HLS for FPGA ☆12 · Updated last year
- [TVLSI 2025] ACiM Inference Simulation Framework in "ASiM: Modeling and Analyzing Inference Accuracy of SRAM-Based Analog CiM Circuits" ☆19 · Updated 2 weeks ago
- Code of "Eva-CiM: A System-Level Performance and Energy Evaluation Framework for Computing-in-Memory Architectures", TCAD 2020 ☆11 · Updated 4 years ago
- ☆57 · Updated last year
- ☆18 · Updated 2 years ago
- ☆11 · Updated 2 years ago
- Benchmark framework of compute-in-memory-based accelerators for deep neural networks (inference-engine focused) ☆73 · Updated 6 months ago
- Automatic generation of FPGA-based learning accelerators for the neural network family ☆67 · Updated 5 years ago
- [TCAD'23] AccelTran: A Sparsity-Aware Accelerator for Transformers ☆51 · Updated last year
- Attentionlego ☆12 · Updated last year
- C++ code for an HLS FPGA implementation of a transformer ☆18 · Updated last year
- Open-source release of the MSD framework ☆16 · Updated 2 years ago
- ☆35 · Updated 5 years ago
- An FPGA Accelerator for Transformer Inference ☆90 · Updated 3 years ago
- A systolic array simulator for multi-cycle MACs and varying-byte words, with the paper accepted to HPCA 2022. ☆80 · Updated 3 years ago
- Accelergy is an energy estimation infrastructure for accelerators ☆149 · Updated 4 months ago
- ☆11 · Updated 5 months ago
- A co-design architecture for sparse attention ☆52 · Updated 4 years ago
- A Reconfigurable Accelerator with Data Reordering Support for Low-Cost On-Chip Dataflow Switching ☆63 · Updated last month
- A bit-level sparsity-aware multiply-accumulate processing element. ☆16 · Updated last year
- Multi-core HW accelerator mapping optimization framework for layer-fused ML workloads. ☆58 · Updated 2 months ago