sail-sg / Adan
Adan: Adaptive Nesterov Momentum Algorithm for Faster Optimizing Deep Models
☆760 · Updated 4 months ago
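Adan is meant to be used as a drop-in `torch.optim`-style optimizer. Below is a minimal training-step sketch; the import path (`from adan import Adan`) and the hyperparameter values are illustrative assumptions based on the repo's interface, not verified defaults.

```python
import torch
import torch.nn as nn

# Assumption: the Adan class from this repo is importable like this;
# check the repo's README for the actual module path and defaults.
from adan import Adan

model = nn.Linear(10, 2)
criterion = nn.CrossEntropyLoss()

# Illustrative hyperparameters only; tune lr / weight_decay per task.
optimizer = Adan(model.parameters(), lr=1e-3, weight_decay=0.02)

x = torch.randn(32, 10)
y = torch.randint(0, 2, (32,))

optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```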
Related projects
Alternatives and complementary repositories for Adan
- Neighborhood Attention Transformer, arXiv 2022 / CVPR 2023. Dilated Neighborhood Attention Transformer, arXiv 2022 ☆1,054 · Updated 6 months ago
- Transformer based on a variant of attention with linear complexity with respect to sequence length ☆698 · Updated 6 months ago
- PoolFormer: MetaFormer Is Actually What You Need for Vision (CVPR 2022 Oral) ☆1,295 · Updated 5 months ago
- A simple way to keep track of an Exponential Moving Average (EMA) version of your Pytorch model ☆517 · Updated last month
- Implementation of ConvMixer for "Patches Are All You Need? 🤷" ☆1,062 · Updated 2 years ago
- MetaFormer Baselines for Vision (TPAMI 2024) ☆421 · Updated 5 months ago
- (ICLR 2022 Spotlight) Official PyTorch implementation of "How Do Vision Transformers Work?" ☆806 · Updated 2 years ago
- PyTorch implementation of MoCo v3 https://arxiv.org/abs/2104.02057 ☆1,218 · Updated 2 years ago
- EfficientFormerV2 [ICCV 2023] & EfficientFormer [NeurIPS 2022] ☆991 · Updated last year
- A method to increase the speed and lower the memory footprint of existing vision transformers. ☆970 · Updated 5 months ago
- Neighborhood Attention Extension. Bringing attention to a neighborhood near you! ☆366 · Updated this week
- Code release for ConvNeXt V2 model ☆1,529 · Updated 3 months ago
- An All-MLP solution for Vision, from Google AI ☆1,003 · Updated 2 months ago
- [ECCV 2022] Official repository for "MaxViT: Multi-Axis Vision Transformer". SOTA foundation models for classification, detection, segmen… ☆446 · Updated last year
- 🦁 Lion, new optimizer discovered by Google Brain using genetic algorithms that is purportedly better than Adam(w), in Pytorch (see the usage sketch after this list) ☆2,041 · Updated 5 months ago
- Implementation of Rotary Embeddings, from the Roformer paper, in Pytorch ☆571 · Updated last week
- Pix2Seq codebase: multi-tasks with generative modeling (autoregressive and diffusion) ☆872 · Updated last year
- iBOT: Image BERT Pre-Training with Online Tokenizer (ICLR 2022) ☆679 · Updated 2 years ago
- This is a collection of our NAS and Vision Transformer work. ☆1,689 · Updated 3 months ago
- ConvMAE: Masked Convolution Meets Masked Autoencoders ☆484 · Updated last year
- This is an official implementation for "SimMIM: A Simple Framework for Masked Image Modeling". ☆927 · Updated 2 years ago
- Official PyTorch implementation of Fully Attentional Networks ☆467 · Updated last year
- Scaling Up Your Kernels to 31x31: Revisiting Large Kernel Design in CNNs (CVPR 2022) ☆870 · Updated 6 months ago
- Escaping the Big Data Paradigm with Compact Transformers, 2021 (Train your Vision Transformers in 30 mins on CIFAR-10 with a single GPU!) ☆500 · Updated 2 weeks ago
- [ECCV 2022] Code for paper "DaViT: Dual Attention Vision Transformer" ☆330 · Updated 9 months ago
- [NeurIPS 2021] [T-PAMI] DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification ☆572 · Updated last year
- A collection of literature after or concurrent with Masked Autoencoder (MAE) (Kaiming He et al.). ☆782 · Updated 4 months ago
- Implementation of a memory efficient multi-head attention as proposed in the paper, "Self-attention Does Not Need O(n²) Memory" ☆360 · Updated last year
- Implementation of Linformer for Pytorch ☆257 · Updated 10 months ago
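For comparison with Adan, the Lion optimizer listed above also exposes a standard `torch.optim`-style interface. A minimal sketch, assuming the `lion-pytorch` package; the hyperparameter values here are illustrative assumptions, not that repo's verified defaults.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Assumption: Lion is provided by the lion-pytorch package.
from lion_pytorch import Lion

model = nn.Linear(10, 2)

# The Lion paper generally recommends a smaller learning rate and larger
# weight decay than AdamW; these values are only placeholders.
optimizer = Lion(model.parameters(), lr=1e-4, weight_decay=1e-2)

x = torch.randn(32, 10)
y = torch.randint(0, 2, (32,))

optimizer.zero_grad()
loss = F.cross_entropy(model(x), y)
loss.backward()
optimizer.step()
```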