facebookresearch / ToMe
A method to increase the speed and lower the memory footprint of existing vision transformers.
☆1,071 · Updated last year
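ToMe (Token Merging) is applied to an already-trained ViT by patching its blocks so that similar tokens are merged as the image passes through the network. Below is a minimal usage sketch assuming the `tome` package from this repo and its `tome.patch.timm` / `model.r` interface; the exact names are taken from memory of the README and may differ by version.

```python
# Minimal sketch: speeding up a pretrained timm ViT with ToMe.
# Assumes the `tome` package from facebookresearch/ToMe exposes
# `tome.patch.timm` and a per-layer merge count `model.r` (names may vary).
import torch
import timm
import tome

# Load any standard ViT from timm; no retraining is required.
model = timm.create_model("vit_base_patch16_224", pretrained=True)

# Patch the model so each transformer block merges similar tokens.
tome.patch.timm(model)

# r = number of tokens merged per layer; larger r trades accuracy for speed/memory.
model.r = 16

# Inference works exactly as before, just with fewer tokens in later layers.
with torch.no_grad():
    out = model(torch.randn(1, 3, 224, 224))
print(out.shape)  # e.g. torch.Size([1, 1000])
```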
Alternatives and similar repositories for ToMe
Users interested in ToMe are comparing it to the libraries listed below.
- Hiera: A fast, powerful, and simple hierarchical vision transformer. ☆999 · Updated last year
- Pix2Seq codebase: multi-task learning with generative modeling (autoregressive and diffusion). ☆917 · Updated last year
- [ICLR 2024] Official PyTorch implementation of FasterViT: Fast Vision Transformers with Hierarchical Attention. ☆860 · Updated 3 months ago
- Robust fine-tuning of zero-shot models. ☆721 · Updated 3 years ago
- Official PyTorch implementation of GroupViT: Semantic Segmentation Emerges from Text Supervision, CVPR 2022. ☆766 · Updated 3 years ago
- iBOT: Image BERT Pre-Training with Online Tokenizer (ICLR 2022). ☆736 · Updated 3 years ago
- CLIP-like model evaluation. ☆737 · Updated 3 weeks ago
- Neighborhood Attention Transformer, arXiv 2022 / CVPR 2023. Dilated Neighborhood Attention Transformer, arXiv 2022. ☆1,128 · Updated last year
- Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in PyTorch. ☆1,155 · Updated last year
- Official open-source code for "Scaling Language-Image Pre-training via Masking". ☆426 · Updated 2 years ago
- ☆652 · Updated 2 weeks ago
- EfficientFormerV2 [ICCV 2023] & EfficientFormer [NeurIPS 2022]. ☆1,059 · Updated last year
- [NeurIPS 2021] [T-PAMI] DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification. ☆615 · Updated 2 years ago
- A collection of papers on the topic of "Computer Vision in the Wild (CVinW)". ☆1,316 · Updated last year
- [ICLR 2023 Spotlight] Vision Transformer Adapter for Dense Predictions. ☆1,384 · Updated last month
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training). ☆1,208 · Updated last year
- A concise but complete implementation of CLIP with various experimental improvements from recent papers. ☆713 · Updated last year
- EVA Series: Visual Representation Fantasies from BAAI. ☆2,527 · Updated 11 months ago
- [CVPR 2023] Official implementation of X-Decoder for generalized decoding for pixel, image, and language. ☆1,321 · Updated last year
- [ICML 2023] Official PyTorch implementation of Global Context Vision Transformers. ☆438 · Updated last year
- MetaFormer Baselines for Vision (TPAMI 2024). ☆474 · Updated last year
- [NeurIPS 2022] Official code for "Focal Modulation Networks". ☆738 · Updated last year
- Official PyTorch implementation of ODISE: Open-Vocabulary Panoptic Segmentation with Text-to-Image Diffusion Models [CVPR 2023 Highlight]. ☆914 · Updated last year
- [ECCV 2022] Official repository for "MaxViT: Multi-Axis Vision Transformer". SOTA foundation models for classification, detection, segmen… ☆477 · Updated 2 years ago
- [CVPR 2022] Official code for "RegionCLIP: Region-based Language-Image Pretraining". ☆774 · Updated last year
- ❄️🔥 Visual Prompt Tuning [ECCV 2022] https://arxiv.org/abs/2203.12119 ☆1,135 · Updated last year
- PoolFormer: MetaFormer Is Actually What You Need for Vision (CVPR 2022 Oral). ☆1,344 · Updated last year
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm. ☆660 · Updated 2 years ago
- [ICLR 2023 Oral] Image as Set of Points. ☆569 · Updated last year
- Adan: Adaptive Nesterov Momentum Algorithm for Faster Optimizing Deep Models. ☆795 · Updated last month