VanillaNet (☆821, updated Oct 19, 2023)
Alternatives and similar repositories for VanillaNet
Users interested in VanillaNet are comparing it to the libraries listed below.
- PyTorch code and checkpoints release for VanillaKD: https://arxiv.org/abs/2305.15781 (☆76, updated Nov 21, 2023)
- [CVPR 2023] Code for PConv and FasterNet (☆813, updated May 16, 2023)
- RepVGG: Making VGG-style ConvNets Great Again (☆3,458, updated Feb 10, 2023)
- Efficient AI Backbones including GhostNet, TNT and MLP, developed by Huawei Noah's Ark Lab (☆4,385, updated Mar 15, 2025)
- Code release for the ConvNeXt V2 model (☆1,975, updated Aug 14, 2024)
- Official repo of RepOptimizers and RepOpt-VGG (☆268, updated Feb 10, 2023)
- Code release for the ConvNeXt model (☆6,300, updated Jan 8, 2023)
- Efficient computing methods developed by Huawei Noah's Ark Lab (☆1,306, updated Nov 5, 2024)
- [CVPR 2023 Highlight] InternImage: Exploring Large-Scale Vision Foundation Models with Deformable Convolutions (☆2,793, updated Mar 25, 2025)
- Official implementation of the research paper "An Improved One millisecond Mobile Backbone" (CVPR 2023) (☆817, updated Jul 25, 2022)
- EfficientFormerV2 [ICCV 2023] & EfficientFormer [NeurIPS 2022] (☆1,109, updated Aug 13, 2023)
- [ICCV 2023] Official PyTorch implementation of "Rethinking Mobile Block for Efficient Attention-based Models" (☆254, updated Oct 24, 2023)
- RepViT: Revisiting Mobile CNN From ViT Perspective [CVPR 2024] and RepViT-SAM: Towards Real-Time Segmenting Anything (☆1,065, updated Jun 14, 2024)
- Efficient vision foundation models for high-resolution generation and perception (☆3,249, updated Sep 5, 2025)
- [CVPR 2023] DepGraph: Towards Any Structural Pruning; LLMs, Vision Foundation Models, etc. (☆3,262, updated Sep 7, 2025)
- RM Operation can equivalently convert ResNet to VGG, which is better for pruning, and can help RepVGG perform better when the depth is la… (☆210, updated Jun 17, 2023)
- Scaling Up Your Kernels to 31x31: Revisiting Large Kernel Design in CNNs (CVPR 2022) (☆940, updated Apr 24, 2024)
- PoolFormer: MetaFormer Is Actually What You Need for Vision (CVPR 2022 Oral) (☆1,367, updated Jun 1, 2024)
- Official code of the papers "Reversible Column Networks" and "RevColV2" (☆265, updated Sep 6, 2023)
- ResNeSt: Split-Attention Networks (☆3,267, updated Dec 9, 2022)
- [CVPR 2024] Deformable Convolution v4 (☆710, updated May 17, 2024)
- MetaFormer Baselines for Vision (TPAMI 2024) (☆495, updated Jun 1, 2024)
- CVNets: A library for training computer vision networks (☆1,967, updated Oct 30, 2023)
- [ICLR 2024] Official PyTorch implementation of FasterViT: Fast Vision Transformers with Hierarchical Attention (☆906, updated Jul 22, 2025)
- A collection of NAS and Vision Transformer work (☆1,823, updated Jul 25, 2024)
- The largest collection of PyTorch image encoders / backbones, including train, eval, inference, export scripts, and pretrained weights --… (☆36,420, updated Feb 26, 2026)
- Official code for the MobileSAM project that makes SAM lightweight for mobile applications and beyond! (☆5,634, updated Dec 19, 2025)
- Official implementation of "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows" (☆15,721, updated Jul 24, 2024)
- YOLO-MS: Rethinking Multi-Scale Representation Learning for Real-Time Object Detection (☆318, updated Jun 19, 2025)
- [ICLR 2023] "More ConvNets in the 2020s: Scaling up Kernels Beyond 51x51 using Sparsity"; [ICML 2023] "Are Large Kernels Better Teachers… (☆284, updated Jul 5, 2023)
- OpenMMLab YOLO series toolbox and benchmark. Implements RTMDet, RTMDet-Rotated, YOLOv5, YOLOv6, YOLOv7, YOLOv8, YOLOX, PPYOLOE, etc. (☆3,413, updated Jul 14, 2024)
- EVA Series: Visual Representation Fantasies from BAAI (☆2,648, updated Aug 1, 2024)
- NanoDet-Plus ⚡ Super fast and lightweight anchor-free object detection model. 🔥 Only 980 KB (int8) / 1.8 MB (fp16), running at 97 FPS on cellphone… (☆6,168, updated Aug 8, 2024)
- OpenMMLab Model Compression Toolbox and Benchmark (☆1,662, updated Jun 11, 2024)
- ConvMAE: Masked Convolution Meets Masked Autoencoders (☆524, updated Mar 14, 2023)
- Painter & SegGPT Series: Vision Foundation Models from BAAI (☆2,592, updated Dec 6, 2024)