mean9park / BitFusion-verilog
BitFusion Verilog implementation
☆8 · Updated 3 years ago
Alternatives and similar repositories for BitFusion-verilog
Users interested in BitFusion-verilog are comparing it to the libraries listed below.
- Open-source release of the MSD framework ☆16 · Updated last year
- A bit-level sparsity-aware multiply-accumulate processing element ☆16 · Updated 10 months ago
- A framework for fast exploration of the depth-first scheduling space for DNN accelerators ☆39 · Updated 2 years ago
- ☆27 · Updated 2 months ago
- Model LLM inference on single-core dataflow accelerators ☆10 · Updated 3 months ago
- A co-designed architecture for sparse attention ☆52 · Updated 3 years ago
- ☆18 · Updated 2 years ago
- An efficient spatial accelerator enabling hybrid sparse attention mechanisms for long sequences ☆27 · Updated last year
- Sparse CNN accelerator targeting Intel FPGAs ☆11 · Updated 3 years ago
- [TCAD'23] AccelTran: A Sparsity-Aware Accelerator for Transformers ☆44 · Updated last year
- ☆15 · Updated last year
- LoAS: Fully Temporal-Parallel Dataflow for Dual-Sparse Spiking Neural Networks, MICRO 2024 ☆11 · Updated 2 months ago
- C++ code for an HLS FPGA implementation of a transformer ☆17 · Updated 8 months ago
- FPGA implementation of an 8×8 weight-stationary systolic-array DNN accelerator ☆11 · Updated 4 years ago
- Multi-core HW accelerator mapping optimization framework for layer-fused ML workloads ☆53 · Updated last month
- ☆44 · Updated 2 years ago
- Implementations of basic hardware units in RTL (Verilog for now), which can be used for area/power evaluation and … ☆11 · Updated last year
- Accelerate a multi-head attention transformer model using HLS for FPGA ☆11 · Updated last year
- (Verilog) A simple convolution-layer implementation with a systolic-array structure ☆13 · Updated 3 years ago
- ☆16 · Updated 2 years ago
- ☆41 · Updated 5 months ago
- ☆27 · Updated this week
- FracBNN: Accurate and FPGA-Efficient Binary Neural Networks with Fractional Activations ☆94 · Updated 3 years ago
- Benchmark framework for compute-in-memory-based deep neural network accelerators (focused on on-chip training chips) ☆48 · Updated 4 years ago
- ☆17 · Updated 8 months ago
- AFP, a hardware-friendly quantization framework for DNNs, contributed by Fangxin Liu and Wenbo Zhao ☆12 · Updated 3 years ago
- Fast emulation of approximate DNN accelerators in PyTorch ☆23 · Updated last year
- ☆17 · Updated 4 years ago
- ☆45 · Updated 3 years ago
- Efficient FPGA-based accelerator for convolutional neural networks ☆14 · Updated 10 months ago
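Several entries above (the 8×8 weight-stationary systolic-array accelerator and the systolic convolution layer, for example) are built from multiply-accumulate processing elements. A minimal, hypothetical sketch of one weight-stationary PE in Verilog, with all module and port names invented for illustration, might look like:

```verilog
// Hypothetical weight-stationary systolic processing element (PE):
// the weight is preloaded and held in place; activations stream in from
// the left and are forwarded right, while partial sums flow downward.
module systolic_pe #(
    parameter DATA_W = 8,   // activation/weight width
    parameter ACC_W  = 32   // partial-sum accumulator width
) (
    input  wire                     clk,
    input  wire                     rst_n,
    input  wire                     load_w,    // preload the stationary weight
    input  wire signed [DATA_W-1:0] w_in,
    input  wire signed [DATA_W-1:0] act_in,    // activation from the left
    input  wire signed [ACC_W-1:0]  psum_in,   // partial sum from above
    output reg  signed [DATA_W-1:0] act_out,   // forwarded to the right PE
    output reg  signed [ACC_W-1:0]  psum_out   // forwarded to the PE below
);
    reg signed [DATA_W-1:0] weight;

    always @(posedge clk or negedge rst_n) begin
        if (!rst_n) begin
            weight   <= 0;
            act_out  <= 0;
            psum_out <= 0;
        end else begin
            if (load_w)
                weight <= w_in;
            act_out  <= act_in;                     // pass activation right
            psum_out <= psum_in + act_in * weight;  // accumulate downward
        end
    end
endmodule
```

An N×N grid of such PEs computes a matrix multiply by preloading one weight per PE, then streaming activations in with a one-cycle skew per row so partial sums align as they travel down each column.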