fffasttime / MRFI
☆12 Updated 4 months ago
Alternatives and similar repositories for MRFI
Users interested in MRFI are comparing it to the repositories listed below.
- ☆18 Updated last year
- ☆18 Updated 2 years ago
- Code of "Eva-CiM: A System-Level Performance and Energy Evaluation Framework for Computing-in-Memory Architectures", TCAD 2020 ☆11 Updated 4 years ago
- ☆28 Updated 4 months ago
- ☆54 Updated last year
- A Unified Framework for Training, Mapping and Simulation of ReRAM-Based Convolutional Neural Network Acceleration ☆34 Updated 3 years ago
- Fast Emulation of Approximate DNN Accelerators in PyTorch ☆24 Updated last year
- A systolic array simulator for multi-cycle MACs and varying-byte words, with the paper accepted to HPCA 2022. ☆80 Updated 3 years ago
- Open-source release of the MSD framework ☆16 Updated last year
- An HLS-based Winograd systolic CNN accelerator ☆53 Updated 4 years ago
- FracBNN: Accurate and FPGA-Efficient Binary Neural Networks with Fractional Activations ☆94 Updated 3 years ago
- ☆68 Updated 6 months ago
- A collection of research papers on SRAM-based compute-in-memory architectures. ☆29 Updated last year
- A Reconfigurable Accelerator with Data Reordering Support for Low-Cost On-Chip Dataflow Switching ☆57 Updated 4 months ago
- ☆17 Updated 2 months ago
- A bit-level sparsity-aware multiply-accumulate processing element. ☆16 Updated last year
- RTL implementation of Flex-DPE. ☆109 Updated 5 years ago
- ☆41 Updated last year
- ☆44 Updated 2 years ago
- [TCAD'23] AccelTran: A Sparsity-Aware Accelerator for Transformers ☆51 Updated last year
- Benchmark framework of compute-in-memory based accelerators for deep neural network (inference engine focused) ☆73 Updated 5 months ago
- An FPGA Accelerator for Transformer Inference ☆88 Updated 3 years ago
- C++ code for HLS FPGA implementation of transformer ☆17 Updated 11 months ago
- ☆35 Updated 5 years ago
- Public repository for the DAC 2021 paper "Scaling up HBM Efficiency of Top-K SpMV for Approximate Embedding Similarity on FPGAs" ☆14 Updated 3 years ago
- A framework for fast exploration of the depth-first scheduling space for DNN accelerators ☆39 Updated 2 years ago
- ☆31 Updated 4 years ago
- Automatic generation of FPGA-based learning accelerators for the neural network family ☆67 Updated 5 years ago
- ☆11 Updated last year
- A co-design architecture on sparse attention ☆51 Updated 3 years ago