FPGA-based hardware accelerator for Vision Transformer (ViT), with Hybrid-Grained Pipeline.
☆139 · Jan 20, 2025 · Updated last year
Alternatives and similar repositories for HG-PIPE
Users interested in HG-PIPE are comparing it to the repositories listed below.
- You can run it on the PYNQ-Z1. The repository contains the relevant Verilog code, Vivado configuration, and C code for SDK testing. The size o… ☆241 · Mar 24, 2024 · Updated 2 years ago
- ☆15 · Aug 10, 2023 · Updated 2 years ago
- FPGA-based Vision Transformer accelerator (Harvard CS205) ☆152 · Feb 11, 2025 · Updated last year
- ☆14 · Mar 22, 2024 · Updated 2 years ago
- (Not actively updated) Vision Transformer accelerator implemented in Vivado HLS for Xilinx FPGAs. ☆20 · Dec 29, 2024 · Updated last year
- An FPGA Accelerator for Transformer Inference ☆93 · Apr 29, 2022 · Updated 3 years ago
- Accelerates a multi-head attention Transformer model using HLS for FPGA ☆12 · Dec 7, 2023 · Updated 2 years ago
- ☆14 · Jun 22, 2022 · Updated 3 years ago
- [HPCA 2023] ViTCoD: Vision Transformer Acceleration via Dedicated Algorithm and Accelerator Co-Design ☆129 · Jun 27, 2023 · Updated 2 years ago
- A hobby project in SystemVerilog to accelerate the LeViT network, which contains CNN and attention layers. ☆36 · Aug 13, 2024 · Updated last year
- Collection of kernel accelerators optimised for LLM execution ☆30 · Feb 26, 2026 · Updated last month
- Open-source release of the MSD framework ☆16 · Sep 12, 2023 · Updated 2 years ago
- C++ version of ViT ☆12 · Nov 13, 2022 · Updated 3 years ago
- An efficient spatial accelerator enabling hybrid sparse attention mechanisms for long sequences ☆32 · Mar 7, 2024 · Updated 2 years ago
- ☆47 · Aug 23, 2021 · Updated 4 years ago
- Research and materials on hardware implementation of Transformer models ☆303 · Feb 28, 2025 · Updated last year
- [TCAD'23] AccelTran: A Sparsity-Aware Accelerator for Transformers ☆60 · Nov 22, 2023 · Updated 2 years ago
- ☆122 · Jan 11, 2024 · Updated 2 years ago
- Edge-MoE: Memory-Efficient Multi-Task Vision Transformer Architecture with Task-level Sparsity via Mixture-of-Experts ☆134 · May 10, 2024 · Updated last year
- List of papers on Vision Transformer quantization and hardware acceleration in recent AI conferences and journals. ☆104 · Jun 2, 2024 · Updated last year
- FPGA implementation of an 8x8 weight-stationary systolic-array DNN accelerator ☆17 · Feb 27, 2021 · Updated 5 years ago
- [HPCA'21] SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning ☆128 · Aug 27, 2024 · Updated last year
- [TCAD'24] This repository contains the source code for the paper "FireFly v2: Advancing Hardware Support for High-Performance Spiking Neu… ☆25 · May 9, 2024 · Updated last year
- ☆69 · Apr 22, 2025 · Updated 11 months ago
- SSR: Spatial Sequential Hybrid Architecture for Latency Throughput Tradeoff in Transformer Acceleration (Full Paper Accepted in FPGA'24) ☆36 · Mar 12, 2026 · Updated last month
- Training and implementation of a CNN for image classification with binary weights and activations on FPGA with HLS tools ☆53 · May 29, 2018 · Updated 7 years ago
- A co-design architecture for sparse attention ☆55 · Aug 23, 2021 · Updated 4 years ago
- An open-source Verilog-based LeNet-1 parallel CNN accelerator for FPGAs in Vivado 2017 ☆25 · May 20, 2019 · Updated 6 years ago
- C++ code for an HLS FPGA implementation of a Transformer ☆23 · Sep 11, 2024 · Updated last year
- ☆10 · Oct 8, 2021 · Updated 4 years ago
- ☆11 · Jun 4, 2024 · Updated last year
- [DATE 2025] Official implementation and dataset of AIrchitect v2: Learning the Hardware Accelerator Design Space through Unified Represen… ☆19 · Jan 17, 2025 · Updated last year
- [ASP-DAC 2025] Official implementation of "NeuronQuant: Accurate and Efficient Post-Training Quantization for Spiking Neural Networks" ☆19 · Mar 6, 2025 · Updated last year
- TMMA: A Tiled Matrix Multiplication Accelerator for Self-Attention Projections in Transformer Models, optimized for edge deployment on Xi… ☆31 · Apr 7, 2026 · Updated last week
- Attentionlego ☆13 · Jan 24, 2024 · Updated 2 years ago
- H2-LLM: Hardware-Dataflow Co-Exploration for Heterogeneous Hybrid-Bonding-based Low-Batch LLM Inference ☆97 · Apr 26, 2025 · Updated 11 months ago
- ☆11 · Apr 15, 2024 · Updated 2 years ago
- A student training project for HLS and Transformers ☆11 · Oct 19, 2022 · Updated 3 years ago
- CNN SIMD-based accelerator using Vitis HLS ☆11 · Jul 15, 2022 · Updated 3 years ago