OPTML-Group / Robust-MoE-CNN
[ICCV23] Robust Mixture-of-Expert Training for Convolutional Neural Networks by Yihua Zhang, Ruisi Cai, Tianlong Chen, Guanhua Zhang, Huan Zhang, Pin-Yu Chen, Shiyu Chang, Zhangyang (Atlas) Wang, Sijia Liu
☆66 · Updated 2 years ago
Alternatives and similar repositories for Robust-MoE-CNN
Users interested in Robust-MoE-CNN are comparing it to the libraries listed below
- Code for 'Multi-level Logit Distillation' (CVPR2023) ☆71 · Updated last year
- PyTorch code and checkpoints release for OFA-KD: https://arxiv.org/abs/2310.19444 ☆135 · Updated last year
- Official PyTorch (MMCV) implementation of “Adversarial AutoMixup” (ICLR 2024 spotlight) ☆71 · Updated last year
- ImageNet-1K data download and processing for use as a dataset ☆126 · Updated 2 years ago
- Official implementation of paper "Knowledge Distillation from A Stronger Teacher", NeurIPS 2022 ☆154 · Updated 3 years ago
- [ICCV 2023 & AAAI 2023] Binary Adapters & FacT, [Tech report] Convpass ☆198 · Updated 2 years ago
- ☆92 · Updated 2 years ago
- Official implementation for paper "Knowledge Diffusion for Distillation", NeurIPS 2023 ☆94 · Updated last year
- [CVPR-2024] Official implementations of CLIP-KD: An Empirical Study of CLIP Model Distillation ☆137 · Updated 4 months ago
- The official PyTorch implementation of our CVPR-2024 paper "MMA: Multi-Modal Adapter for Vision-Language Models". ☆95 · Updated 8 months ago
- Convolutional Initialization for Data-Efficient Vision Transformers ☆16 · Updated last month
- Code for ICML 2024 paper (Oral) — Test-Time Model Adaptation with Only Forward Passes ☆92 · Updated last year
- [CVPR2024] Efficient Dataset Distillation via Minimax Diffusion ☆104 · Updated last year
- (NeurIPS 2023 spotlight) Large-scale Dataset Distillation/Condensation, 50 IPC (Images Per Class) achieves the highest 60.8% on original … ☆136 · Updated last year
- Code for ICLR 2023 paper (Oral) — Towards Stable Test-Time Adaptation in Dynamic Wild World ☆201 · Updated 2 years ago
- Official code for Scale Decoupled Distillation ☆43 · Updated last year
- 'NKD and USKD' (ICCV 2023) and 'ViTKD' (CVPRW 2024) ☆241 · Updated 2 years ago
- This repository maintains a collection of important papers on knowledge distillation (awesome-knowledge-distillation). ☆82 · Updated 9 months ago
- [CVPR 2023] This repository includes the official implementation of our paper "Masked Autoencoders Enable Efficient Knowledge Distillers" ☆108 · Updated 2 years ago
- Code for ICML 2022 paper — Efficient Test-Time Model Adaptation without Forgetting ☆135 · Updated 2 years ago
- [CVPR 2023 Highlight] This is the official implementation of "Stitchable Neural Networks". ☆250 · Updated 2 years ago
- The official implementation of paper: "Inter-Instance Similarity Modeling for Contrastive Learning" ☆117 · Updated last year
- Awesome-Low-Rank-Adaptation ☆126 · Updated last year
- ICLR 2024, Towards Lossless Dataset Distillation via Difficulty-Aligned Trajectory Matching ☆105 · Updated last year
- The official repo for CVPR2023 highlight paper "Gradient Norm Aware Minimization Seeks First-Order Flatness and Improves Generalization". ☆84 · Updated 2 years ago
- The official implementation of ImbSAM (Imbalanced-SAM) ☆25 · Updated last year
- [CVPR 2024] On the Diversity and Realism of Distilled Dataset: An Efficient Dataset Distillation Paradigm ☆80 · Updated 10 months ago
- Efficient Dataset Distillation by Representative Matching ☆113 · Updated last year
- Low rank adaptation for Vision Transformer ☆428 · Updated last year
- ☆28 · Updated 2 years ago