zjysteven / bitslice_sparsity
Code for our paper "Exploring Bit-Slice Sparsity in Deep Neural Networks for Efficient ReRAM-Based Deployment" [NeurIPS'19 EMC2 workshop].
☆11 · Updated 4 years ago
Alternatives and similar repositories for bitslice_sparsity
Users interested in bitslice_sparsity are comparing it to the repositories listed below.
- ☆18 · Updated 2 years ago
- Open-source release of the MSD framework ☆16 · Updated last year
- FracBNN: Accurate and FPGA-Efficient Binary Neural Networks with Fractional Activations ☆94 · Updated 3 years ago
- ☆27 · Updated 3 months ago
- A bit-level sparsity-aware multiply-accumulate processing element ☆16 · Updated last year
- [ICASSP'20] DNN-Chip Predictor: An Analytical Performance Predictor for DNN Accelerators with Various Dataflows and Hardware Architecture… ☆25 · Updated 2 years ago
- A framework for fast exploration of the depth-first scheduling space for DNN accelerators ☆39 · Updated 2 years ago
- HLS-implemented systolic array structure ☆41 · Updated 7 years ago
- Sparse CNN accelerator targeting Intel FPGAs ☆12 · Updated 3 years ago
- [TCAD'23] AccelTran: A Sparsity-Aware Accelerator for Transformers ☆49 · Updated last year
- ☆33 · Updated 6 years ago
- Fast emulation of approximate DNN accelerators in PyTorch ☆23 · Updated last year
- An FPGA accelerator for Transformer inference ☆85 · Updated 3 years ago
- An HLS-based Winograd systolic CNN accelerator ☆53 · Updated 3 years ago
- [TECS'23] A project on the co-design of accelerators and CNNs ☆20 · Updated 2 years ago
- ☆71 · Updated 5 years ago
- Benchmark framework of compute-in-memory-based accelerators for deep neural networks (focused on inference engines) ☆22 · Updated 4 years ago
- ☆21 · Updated 2 years ago
- Implementation of Microscaling data formats in SystemVerilog ☆21 · Updated last week
- ☆44 · Updated 2 years ago
- Neural Network-Hardware Co-design for Scalable RRAM-based BNN Accelerators ☆11 · Updated 6 years ago
- ☆12 · Updated last year
- ☆35 · Updated 5 years ago
- ☆71 · Updated 2 years ago
- ☆41 · Updated last year
- ☆49 · Updated 3 years ago
- ☆24 · Updated 2 years ago
- Multi-core HW accelerator mapping optimization framework for layer-fused ML workloads ☆54 · Updated last week
- Designs from finalist teams of the DAC System Design Contest ☆37 · Updated 5 years ago
- AFP, a hardware-friendly quantization framework for DNNs, contributed by Fangxin Liu and Wenbo Zhao ☆13 · Updated 3 years ago