OPTML-Group / Robust-MoE-CNN
[ICCV 2023] Robust Mixture-of-Expert Training for Convolutional Neural Networks, by Yihua Zhang, Ruisi Cai, Tianlong Chen, Guanhua Zhang, Huan Zhang, Pin-Yu Chen, Shiyu Chang, Zhangyang (Atlas) Wang, Sijia Liu
☆65 · Updated 2 years ago
Alternatives and similar repositories for Robust-MoE-CNN
Users interested in Robust-MoE-CNN are comparing it to the libraries listed below.
- Code for 'Multi-level Logit Distillation' (CVPR 2023) ☆70 · Updated last year
- PyTorch code and checkpoints release for OFA-KD: https://arxiv.org/abs/2310.19444 ☆132 · Updated last year
- ImageNet-1K data download and processing for use as a dataset ☆116 · Updated 2 years ago
- Official PyTorch (MMCV) implementation of “Adversarial AutoMixup” (ICLR 2024 spotlight) ☆69 · Updated 11 months ago
- Official code for Scale Decoupled Distillation ☆41 · Updated last year
- [ICCV 2023 & AAAI 2023] Binary Adapters & FacT, [Tech report] Convpass ☆194 · Updated 2 years ago
- [CVPR '24] Official implementation of the paper "Multiflow: Shifting Towards Task-Agnostic Vision-Language Pruning". ☆23 · Updated 7 months ago
- ☆91 · Updated 2 years ago
- Awesome-Low-Rank-Adaptation ☆116 · Updated last year
- Code for ICML 2024 paper (Oral): Test-Time Model Adaptation with Only Forward Passes ☆88 · Updated last year
- 'NKD and USKD' (ICCV 2023) and 'ViTKD' (CVPRW 2024) ☆238 · Updated 2 years ago
- Official implementation of the paper "Knowledge Distillation from A Stronger Teacher", NeurIPS 2022 ☆149 · Updated 2 years ago
- [CVPR 2024] Official implementation of CLIP-KD: An Empirical Study of CLIP Model Distillation ☆129 · Updated 2 months ago
- [CVPR 2023] This repository includes the official implementation of our paper "Masked Autoencoders Enable Efficient Knowledge Distillers" ☆108 · Updated 2 years ago
- Code for ICLR 2023 paper (Oral): Towards Stable Test-Time Adaptation in Dynamic Wild World ☆192 · Updated 2 years ago
- The official implementation of ImbSAM (Imbalanced-SAM) ☆24 · Updated last year
- Implementation of the AAAI 2022 paper "Go Wider Instead of Deeper" ☆32 · Updated 3 years ago
- (NeurIPS 2023 spotlight) Large-scale Dataset Distillation/Condensation, 50 IPC (Images Per Class) achieves the highest 60.8% on original … ☆130 · Updated 11 months ago
- Low-Rank Rescaled Vision Transformer Fine-Tuning: A Residual Design Approach, CVPR 2024 ☆22 · Updated last year
- Official repository of our work "Finding Lottery Tickets in Vision Models via Data-driven Spectral Foresight Pruning" accepted at CVPR 20… ☆25 · Updated 8 months ago
- A collection of plotting tools for deep learning that I wrote; a star is appreciated if you find them helpful. ☆42 · Updated 3 years ago
- The official repo for the CVPR 2023 highlight paper "Gradient Norm Aware Minimization Seeks First-Order Flatness and Improves Generalization". ☆84 · Updated 2 years ago
- The official implementation of [NeurIPS 2024] Wasserstein Distance Rivals Kullback-Leibler Divergence for Knowledge Distillation https://ar… ☆44 · Updated 10 months ago
- The official implementation of the paper "Improving Knowledge Distillation via Regularizing Feature Norm and Direction" ☆22 · Updated 2 years ago
- Fine-tuning Vision Transformers on various classification datasets ☆109 · Updated last year
- [CVPR 2023 Highlight] This is the official implementation of "Stitchable Neural Networks". ☆249 · Updated 2 years ago
- This repository maintains a collection of important papers on knowledge distillation (awesome-knowledge-distillation). ☆80 · Updated 7 months ago
- Official implementation of the paper "Knowledge Diffusion for Distillation", NeurIPS 2023 ☆90 · Updated last year
- The official PyTorch implementation of our CVPR 2024 paper "MMA: Multi-Modal Adapter for Vision-Language Models". ☆83 · Updated 6 months ago
- [NeurIPS'23] DropPos: Pre-Training Vision Transformers by Reconstructing Dropped Positions ☆62 · Updated last year