facebookresearch / ToMe
A method to increase the speed and lower the memory footprint of existing vision transformers.
☆1,166 · Updated last year
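For context, ToMe (Token Merging) speeds up a pretrained ViT by merging redundant tokens between transformer blocks instead of pruning them. The sketch below is a minimal usage example based on the patching API described in the ToMe README; it assumes a `timm` ViT and that the `tome` package from this repository is installed, and attribute names such as `model.r` may differ across versions.

```python
import torch
import timm
import tome  # installed from the facebookresearch/ToMe repository

# Load any supported ViT/DeiT model from timm.
model = timm.create_model("vit_base_patch16_224", pretrained=True)

# Patch the model in place so its attention blocks merge similar tokens.
tome.patch.timm(model)

# r = number of tokens merged per layer; larger r is faster but less accurate.
model.r = 16

# The patched model is used exactly like the original ViT.
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    logits = model(x)
```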
Alternatives and similar repositories for ToMe
Users interested in ToMe are comparing it to the libraries listed below.
- Pix2Seq codebase: multi-tasks with generative modeling (autoregressive and diffusion) ☆939 · Updated 2 years ago
- Hiera: A fast, powerful, and simple hierarchical vision transformer. ☆1,052 · Updated last year
- Official PyTorch implementation of GroupViT: Semantic Segmentation Emerges from Text Supervision, CVPR 2022. ☆780 · Updated 3 years ago
- CLIP-like model evaluation ☆800 · Updated 3 weeks ago
- [ICLR 2024] Official PyTorch implementation of FasterViT: Fast Vision Transformers with Hierarchical Attention ☆903 · Updated 6 months ago
- ☆705 · Updated 2 months ago
- Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in Pytorch ☆1,199 · Updated 2 years ago
- Robust fine-tuning of zero-shot models ☆759 · Updated 3 years ago
- Adan: Adaptive Nesterov Momentum Algorithm for Faster Optimizing Deep Models ☆807 · Updated 8 months ago
- EVA Series: Visual Representation Fantasies from BAAI ☆2,643 · Updated last year
- Official Open Source code for "Scaling Language-Image Pre-training via Masking" ☆427 · Updated 2 years ago
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training) ☆1,233 · Updated last year
- iBOT: Image BERT Pre-Training with Online Tokenizer (ICLR 2022) ☆763 · Updated 3 years ago
- EfficientFormerV2 [ICCV 2023] & EfficientFormer [NeurIPS 2022] ☆1,104 · Updated 2 years ago
- Neighborhood Attention Transformer, arXiv 2022 / CVPR 2023. Dilated Neighborhood Attention Transformer, arXiv 2022 ☆1,175 · Updated last year
- Official PyTorch implementation of ODISE: Open-Vocabulary Panoptic Segmentation with Text-to-Image Diffusion Models [CVPR 2023 Highlight] ☆933 · Updated last year
- [CVPR 2023] Official Implementation of X-Decoder for generalized decoding for pixel, image and language ☆1,343 · Updated 2 years ago
- ❄️🔥 Visual Prompt Tuning [ECCV 2022] https://arxiv.org/abs/2203.12119 ☆1,211 · Updated 2 years ago
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm ☆673 · Updated 3 years ago
- [CVPR 2022] Official code for "RegionCLIP: Region-based Language-Image Pretraining" ☆807 · Updated last year
- MetaFormer Baselines for Vision (TPAMI 2024) ☆497 · Updated last year
- [ICLR 2023 Oral] Image as Set of Points ☆576 · Updated last year
- [ICLR 2023 Spotlight] Vision Transformer Adapter for Dense Predictions ☆1,468 · Updated 8 months ago
- Implementation of 🦩 Flamingo, state-of-the-art few-shot visual question answering attention net out of Deepmind, in Pytorch ☆1,273 · Updated 3 years ago
- A collection of papers on the topic of "Computer Vision in the Wild (CVinW)" ☆1,354 · Updated last year
- A concise but complete implementation of CLIP with various experimental improvements from recent papers ☆722 · Updated 2 years ago
- PoolFormer: MetaFormer Is Actually What You Need for Vision (CVPR 2022 Oral) ☆1,365 · Updated last year
- Grounded Language-Image Pre-training ☆2,570 · Updated 2 years ago
- [ECCV 2022] Official repository for "MaxViT: Multi-Axis Vision Transformer". SOTA foundation models for classification, detection, segmen… ☆488 · Updated 2 years ago
- Low rank adaptation for Vision Transformer ☆431 · Updated last year