Official implementation of Evo-ViT: Slow-Fast Token Evolution for Dynamic Vision Transformer
☆74 · Jul 13, 2022 · Updated 3 years ago
Alternatives and similar repositories for Evo-ViT
Users interested in Evo-ViT are comparing it to the repositories listed below.
- ☆53 · Aug 28, 2024 · Updated last year
- PyTorch implementation of our paper accepted by IEEE TNNLS, 2022: "Carrying out CNN Channel Pruning in a White Box" ☆18 · Feb 15, 2022 · Updated 4 years ago
- Adaptive Token Sampling for Efficient Vision Transformers (ECCV 2022 Oral Presentation) ☆104 · May 3, 2024 · Updated last year
- [NeurIPS'21] "Chasing Sparsity in Vision Transformers: An End-to-End Exploration" by Tianlong Chen, Yu Cheng, Zhe Gan, Lu Yuan, Lei Zhang… ☆89 · Dec 1, 2023 · Updated 2 years ago
- [ICCV 23] An approach to enhance the efficiency of Vision Transformer (ViT) by concurrently employing token pruning and token merging tech… ☆104 · Jul 14, 2023 · Updated 2 years ago
- ☆13 · Sep 24, 2023 · Updated 2 years ago
- [NeurIPS 2021] [T-PAMI] DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification ☆652 · Jul 11, 2023 · Updated 2 years ago
- [ICLR 2022] "Unified Vision Transformer Compression" by Shixing Yu*, Tianlong Chen*, Jiayi Shen, Huan Yuan, Jianchao Tan, Sen Yang, Ji Li… ☆55 · Dec 1, 2023 · Updated 2 years ago
- ☆19 · May 28, 2020 · Updated 5 years ago
- (AAAI 2023 Oral) PyTorch implementation of "CF-ViT: A General Coarse-to-Fine Method for Vision Transformer" ☆107 · Jul 4, 2023 · Updated 2 years ago
- BESA, a differentiable weight pruning technique for large language models ☆17 · Mar 4, 2024 · Updated 2 years ago
- [TPAMI 2024] Official repository for the paper "Pruning Self-attentions into Convolutional Layers in Single Path" ☆115 · Dec 30, 2023 · Updated 2 years ago
- [ECCV 2022] "PPT: token-Pruned Pose Transformer for monocular and multi-view human pose estimation" ☆62 · Oct 27, 2022 · Updated 3 years ago
- Code for "Searching for Efficient Multi-Stage Vision Transformers" ☆63 · Sep 1, 2021 · Updated 4 years ago
- PyTorch implementation of the NeurIPS 2020 paper "Pruning Filter in Filter" ☆18 · Jan 4, 2021 · Updated 5 years ago
- Official codebase for "Joslim: Joint Widths and Weights Optimization for Slimmable Neural Networks" ☆12 · Jun 30, 2021 · Updated 4 years ago
- Official code for "Token Summarisation for Efficient Vision Transformers via Graph-based Token Propagation" ☆32 · Jan 15, 2024 · Updated 2 years ago
- Vision Transformer Pruning ☆57 · Dec 9, 2021 · Updated 4 years ago
- Official PyTorch implementation of A-ViT: Adaptive Tokens for Efficient Vision Transformer (CVPR 2022) ☆165 · Jul 14, 2022 · Updated 3 years ago
- [ICML 2024] CrossGET: Cross-Guided Ensemble of Tokens for Accelerating Vision-Language Transformers ☆34 · Dec 30, 2024 · Updated last year
- Official implementation of "Making Vision Transformers Efficient from A Token Sparsification View" ☆34 · Feb 17, 2025 · Updated last year
- [ICLR'22] Official implementation of "AS-MLP: An Axial Shifted MLP Architecture for Vision" ☆127 · Oct 15, 2022 · Updated 3 years ago
- Implementation of Continuous Sparsification, a method for pruning and ticket search in deep networks ☆34 · Jun 10, 2022 · Updated 3 years ago
- Official implementation of DE-CondDETR and DELA-CondDETR in "Towards Data-Efficient Detection Transformers" ☆45 · Aug 25, 2022 · Updated 3 years ago
- A training-free approach to accelerate ViTs and VLMs by pruning redundant tokens based on similarity ☆44 · May 24, 2025 · Updated 10 months ago
- [AAAI 2023 Oral] Peeling the Onion: Hierarchical Reduction of Data Redundancy for Efficient Vision Transformer Training ☆14 · Apr 19, 2023 · Updated 2 years ago
- ☆12 · Jul 7, 2021 · Updated 4 years ago
- (CVPR 2021 Oral) Dynamic Slimmable Network ☆231 · Dec 31, 2021 · Updated 4 years ago
- LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference ☆624 · Aug 27, 2022 · Updated 3 years ago
- ☆28 · Nov 29, 2022 · Updated 3 years ago
- ☆28 · Jan 9, 2025 · Updated last year
- [ICCV 2021] Official implementation of "Scalable Vision Transformers with Hierarchical Pooling" ☆33 · Dec 30, 2021 · Updated 4 years ago
- ☆98 · Apr 27, 2022 · Updated 3 years ago
- Code for the AAAI 2021 paper "Enhancing Unsupervised Video Representation Learning by Decoupling the Scene and the Motion" ☆28 · Jan 7, 2021 · Updated 5 years ago
- The RM operation can equivalently convert ResNet to VGG, which is better for pruning, and can help RepVGG perform better when the depth is la… ☆210 · Jun 17, 2023 · Updated 2 years ago
- Distilling the powerful Segment Anything models into lightweight ones for efficient segmentation ☆30 · Apr 27, 2023 · Updated 2 years ago
- Official implementation of "OptMerge: Unifying Multimodal LLM Capabilities and Modalities via Model Merging" ☆47 · Oct 30, 2025 · Updated 4 months ago
- A method to increase the speed and lower the memory footprint of existing vision transformers ☆1,174 · Jun 17, 2024 · Updated last year
- Accelerating T2T-ViT by 1.6-3.6x ☆259 · Nov 25, 2021 · Updated 4 years ago