shihuihong214 / P2-ViT
☆10, updated last year
Alternatives and similar repositories for P2-ViT
Users interested in P2-ViT are comparing it to the repositories listed below.
- ViTALiTy (HPCA'23) Code Repository (☆23, updated 2 years ago)
- A DAG processor and compiler for a tree-based spatial datapath (☆15, updated 3 years ago)
- [TCAD'23] AccelTran: A Sparsity-Aware Accelerator for Transformers (☆55, updated 2 years ago)
- Edge-MoE: Memory-Efficient Multi-Task Vision Transformer Architecture with Task-level Sparsity via Mixture-of-Experts (☆132, updated last year)
- [HPCA 2023] ViTCoD: Vision Transformer Acceleration via Dedicated Algorithm and Accelerator Co-Design (☆125, updated 2 years ago)
- PolyLUT is the first quantized neural network training methodology that maps a neuron to a LUT while using multivariate polynomial functi… (☆55, updated last year)
- Open-source code of the MSD framework (☆16, updated 2 years ago)
- Provides the code for the paper "EBPC: Extended Bit-Plane Compression for Deep Neural Network Inference and Training Accelerators" by Luk… (☆19, updated 6 years ago)
- Implementation of Microscaling data formats in SystemVerilog (☆28, updated 6 months ago)
- [HPCA'21] SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning (☆118, updated last year)
- SSR: Spatial Sequential Hybrid Architecture for Latency Throughput Tradeoff in Transformer Acceleration (full paper accepted at FPGA'24) (☆35, updated this week)
- [ICML 2021] "Auto-NBA: Efficient and Effective Search Over the Joint Space of Networks, Bitwidths, and Accelerators" by Yonggan Fu, Yonga… (☆16, updated 4 years ago)
- An FPGA Accelerator for Transformer Inference (☆92, updated 3 years ago)
- A framework for fast exploration of the depth-first scheduling space for DNN accelerators (☆43, updated 2 years ago)
- (Verilog) A simple convolution layer implementation with a systolic array structure (☆13, updated 3 years ago)
- MaxEVA: Maximizing the Efficiency of Matrix Multiplication on Versal AI Engine (accepted as a full paper at FPT'23) (☆21, updated last year)
- An efficient spatial accelerator enabling hybrid sparse attention mechanisms for long sequences (☆31, updated last year)
- FPGA-based Vision Transformer accelerator (Harvard CS205) (☆142, updated 10 months ago)
- FPGA-based hardware accelerator for Vision Transformer (ViT), with a hybrid-grained pipeline (☆112, updated 11 months ago)
- An FPGA-based neural network inference accelerator, which won third place in DAC-SDC (☆28, updated 3 years ago)
- A bit-level sparsity-aware multiply-accumulate processing element (☆18, updated last year)
- A collection of tutorials for the fpgaConvNet framework (☆47, updated last year)
- An out-of-the-box PyTorch scaffold for Neural Network Quantization-Aware Training (QAT) research. Website: https://github.com/zhutmost/neuralz… (☆25, updated 3 years ago)