☆53 · Aug 28, 2024 · Updated last year
Alternatives and similar repositories for SPViT
Users interested in SPViT are comparing it to the repositories listed below.
- Official implementation of Evo-ViT: Slow-Fast Token Evolution for Dynamic Vision Transformer · ☆74 · Jul 13, 2022 · Updated 3 years ago
- Code for Learned Thresholds Token Merging and Pruning for Vision Transformers (LTMP), a technique to reduce the size of Vision Transformers… · ☆17 · Nov 24, 2024 · Updated last year
- (Verilog) A simple convolution-layer implementation with a systolic-array structure · ☆13 · May 9, 2022 · Updated 3 years ago
- Official implementation of "A Unified Pruning Framework for Vision Transformers" · ☆20 · Aug 3, 2022 · Updated 3 years ago
- [NeurIPS'21] "Chasing Sparsity in Vision Transformers: An End-to-End Exploration" by Tianlong Chen, Yu Cheng, Zhe Gan, Lu Yuan, Lei Zhang… · ☆88 · Dec 1, 2023 · Updated 2 years ago
- [CVPR 2022] Official implementation of the paper "AdaViT: Adaptive Vision Transformers for Efficient Image Recognition" · ☆56 · Aug 18, 2022 · Updated 3 years ago
- [TCAD 2021] Block Convolution: Towards Memory-Efficient Inference of Large-Scale CNNs on FPGA · ☆17 · Jul 7, 2022 · Updated 3 years ago
- Official implementation of "Making Vision Transformers Efficient from A Token Sparsification View" · ☆34 · Feb 17, 2025 · Updated last year
- Official PyTorch implementation of A-ViT: Adaptive Tokens for Efficient Vision Transformer (CVPR 2022) · ☆165 · Jul 14, 2022 · Updated 3 years ago
- Python code for the ICLR 2022 spotlight paper EViT: Expediting Vision Transformers via Token Reorganizations · ☆199 · Sep 3, 2023 · Updated 2 years ago
- [ICLR 2022] "Unified Vision Transformer Compression" by Shixing Yu*, Tianlong Chen*, Jiayi Shen, Huan Yuan, Jianchao Tan, Sen Yang, Ji Li… · ☆54 · Dec 1, 2023 · Updated 2 years ago
- [NeurIPS 2022] "M³ViT: Mixture-of-Experts Vision Transformer for Efficient Multi-task Learning with Model-Accelerator Co-design", Hanxue … · ☆136 · Nov 30, 2022 · Updated 3 years ago
- ☆19 · Mar 21, 2023 · Updated 3 years ago
- [TPAMI 2024] Official repository for the paper "Pruning Self-attentions into Convolutional Layers in Single Path" · ☆115 · Dec 30, 2023 · Updated 2 years ago
- SSR: Spatial Sequential Hybrid Architecture for Latency-Throughput Tradeoff in Transformer Acceleration (full paper accepted at FPGA'24) · ☆36 · Mar 12, 2026 · Updated last month
- LSTM neural network (Verilog) · ☆15 · Dec 5, 2018 · Updated 7 years ago
- Fast and memory-efficient exact attention · ☆30 · Dec 2, 2024 · Updated last year
- A co-design architecture for sparse attention · ☆55 · Aug 23, 2021 · Updated 4 years ago
- Official implementation of "SViT: Revisiting Token Pruning for Object Detection and Instance Segmentation" · ☆36 · Dec 5, 2023 · Updated 2 years ago
- ☆12 · Nov 24, 2023 · Updated 2 years ago
- [AAAI 2023 Oral] Peeling the Onion: Hierarchical Reduction of Data Redundancy for Efficient Vision Transformer Training · ☆14 · Apr 19, 2023 · Updated 2 years ago
- HW/SW co-design of sentence-level energy optimizations for latency-aware multi-task NLP inference · ☆54 · Mar 24, 2024 · Updated 2 years ago
- [NeurIPS 2021] [T-PAMI] DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification · ☆655 · Jul 11, 2023 · Updated 2 years ago
- Deep-learning accelerator for convolutional layers (convolution operation) and fully connected layers (matrix multiplication) · ☆20 · Nov 18, 2018 · Updated 7 years ago
- DeiT implementation for Q-ViT · ☆25 · Apr 21, 2025 · Updated 11 months ago
- Edge-MoE: Memory-Efficient Multi-Task Vision Transformer Architecture with Task-level Sparsity via Mixture-of-Experts · ☆133 · May 10, 2024 · Updated last year
- [TECS'23] A project on the co-design of accelerators and CNNs · ☆21 · Dec 10, 2022 · Updated 3 years ago
- ☆20 · Apr 24, 2022 · Updated 3 years ago
- [ICCV 23] An approach to enhance the efficiency of Vision Transformers (ViT) by concurrently employing token pruning and token merging techniques… · ☆103 · Jul 14, 2023 · Updated 2 years ago
- A bit-level sparsity-aware multiply-accumulate processing element · ☆18 · Jul 9, 2024 · Updated last year
- Top-conference publications on video retrieval in recent years, synchronized with my blog · ☆14 · Nov 27, 2021 · Updated 4 years ago
- Open-source release of the MSD framework · ☆16 · Sep 12, 2023 · Updated 2 years ago
- The official implementation of the NeurIPS 2022 paper Q-ViT · ☆105 · May 22, 2023 · Updated 2 years ago
- FPGA-based Vision Transformer accelerator (Harvard CS205) · ☆152 · Feb 11, 2025 · Updated last year
- [HPCA 2023] ViTCoD: Vision Transformer Acceleration via Dedicated Algorithm and Accelerator Co-Design · ☆129 · Jun 27, 2023 · Updated 2 years ago
- ViTALiTy (HPCA'23) code repository · ☆23 · Mar 13, 2023 · Updated 3 years ago
- The official implementation of the ECCV 2024 publication PYRA (Parallel Yielding Re-Activation) · ☆22 · Dec 19, 2025 · Updated 3 months ago
- A reconfigurable accelerator for deep convolutional neural networks, implemented in Chisel3 · ☆29 · Jul 14, 2021 · Updated 4 years ago
- Searching a High Performance Feature Extractor for Text Recognition Network (TPAMI 2022) · ☆13 · Nov 25, 2022 · Updated 3 years ago
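
Many of the repositories above (EViT, DynamicViT, A-ViT, SViT, LTMP) revolve around the same core idea: score each token's importance and keep only the top-scoring fraction. A minimal, framework-free sketch of that idea follows; the function name, toy tokens, and scores are illustrative inventions, not code from any repository listed here.

```python
def prune_tokens(tokens, scores, keep_ratio):
    """Keep the highest-scoring fraction of tokens, preserving original order.

    tokens:     list of token embeddings (any objects)
    scores:     one importance score per token (e.g. [CLS]-attention mass)
    keep_ratio: fraction of tokens to retain, 0 < keep_ratio <= 1
    """
    k = max(1, int(len(tokens) * keep_ratio))
    # Indices of the k largest scores, restored to their original order
    # so the pruned sequence keeps its positional structure.
    top = sorted(sorted(range(len(tokens)),
                        key=lambda i: scores[i], reverse=True)[:k])
    return [tokens[i] for i in top]

# Example: 6 tokens, keep half -> the 3 highest-scoring tokens survive.
kept = prune_tokens(["t0", "t1", "t2", "t3", "t4", "t5"],
                    [0.9, 0.1, 0.5, 0.7, 0.2, 0.3], 0.5)
# kept == ["t0", "t2", "t3"]
```

The papers differ mainly in where the scores come from (learned prediction heads in DynamicViT, [CLS] attention in EViT, halting scores in A-ViT) and in whether pruned tokens are discarded outright or merged into a summary token, as in LTMP.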