dubcyfor3 / Prosperity
The official implementation of the HPCA 2025 paper "Prosperity: Accelerating Spiking Neural Networks via Product Sparsity".
☆29 · Updated 4 months ago
Alternatives and similar repositories for Prosperity
Users interested in Prosperity are comparing it to the repositories listed below.
- LoAS: Fully Temporal-Parallel Dataflow for Dual-Sparse Spiking Neural Networks, MICRO 2024. ☆11 · Updated 2 months ago
- An efficient spatial accelerator enabling hybrid sparse attention mechanisms for long sequences ☆27 · Updated last year
- ☆57 · Updated last month
- [ASP-DAC 2025] "NeuronQuant: Accurate and Efficient Post-Training Quantization for Spiking Neural Networks" Official Implementation ☆11 · Updated 3 months ago
- ☆41 · Updated 5 months ago
- ☆45 · Updated 3 years ago
- [ASPLOS 2024] CIM-MLC: A Multi-level Compilation Stack for Computing-In-Memory Accelerators ☆37 · Updated last year
- ☆34 · Updated 4 years ago
- BitFusion Verilog implementation ☆8 · Updated 3 years ago
- Simulator for LLM inference on an abstract 3D AIMC-based accelerator ☆14 · Updated last month
- ☆98 · Updated last year
- SATA_Sim is an energy estimation framework for Backpropagation-Through-Time (BPTT) based Spiking Neural Network (SNN) training and infe… ☆27 · Updated 8 months ago
- ☆27 · Updated this week
- Multi-core HW accelerator mapping optimization framework for layer-fused ML workloads. ☆53 · Updated last month
- ☆15 · Updated last year
- [HPCA'21] SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning ☆93 · Updated 9 months ago
- I will share some useful or interesting papers about neuromorphic processors ☆25 · Updated 4 months ago
- ViTALiTy (HPCA'23) Code Repository ☆22 · Updated 2 years ago
- A bit-level sparsity-aware multiply-accumulate processing element. ☆16 · Updated 10 months ago
- A framework for fast exploration of the depth-first scheduling space for DNN accelerators ☆39 · Updated 2 years ago
- [HPCA 2023] ViTCoD: Vision Transformer Acceleration via Dedicated Algorithm and Accelerator Co-Design ☆107 · Updated last year
- ☆18 · Updated 2 years ago
- Benchmark framework of compute-in-memory based accelerators for deep neural networks (focused on on-chip training chips) ☆48 · Updated 4 years ago
- SSR: Spatial Sequential Hybrid Architecture for Latency-Throughput Tradeoff in Transformer Acceleration (Full Paper Accepted at FPGA'24) ☆32 · Updated last week
- HW accelerator mapping optimization framework for in-memory computing ☆24 · Updated this week
- NeuroSync: A Scalable and Accurate Brain Simulation System using Safe and Efficient Speculation (HPCA 2022) ☆12 · Updated 2 years ago
- Here are some implementations of basic hardware units in RTL (Verilog for now), which can be used for area/power evaluation and … ☆11 · Updated last year
- ☆27 · Updated 2 months ago
- A DAG processor and compiler for a tree-based spatial datapath. ☆13 · Updated 2 years ago
- [TCAD'23] AccelTran: A Sparsity-Aware Accelerator for Transformers ☆44 · Updated last year