xuguodong03 / UNIXKD
☆27 · Updated 2 years ago
Alternatives and similar repositories for UNIXKD:
Users that are interested in UNIXKD are comparing it to the libraries listed below
- [NeurIPS'22] What Makes a "Good" Data Augmentation in Knowledge Distillation -- A Statistical Perspective ☆36 · Updated 2 years ago
- Knowledge Transfer via Dense Cross-layer Mutual-distillation (ECCV'2020) ☆30 · Updated 4 years ago
- This repo is the official MegEngine implementation of the ECCV 2022 paper: Efficient One Pass Self-distillation with Zipf's Label Smoothin… ☆26 · Updated 2 years ago
- ☆57 · Updated 3 years ago
- [ICLR 2020] Haotao Wang, Tianlong Chen, Zhangyang Wang, Kede Ma, "I Am Going MAD: Maximum Discrepancy Competition for Comparing Classifie… ☆20 · Updated 3 years ago
- ☆30 · Updated 4 years ago
- Data-free knowledge distillation using Gaussian noise (NeurIPS paper) ☆15 · Updated last year
- [TMLR] "Adversarial Feature Augmentation and Normalization for Visual Recognition", Tianlong Chen, Yu Cheng, Zhe Gan, Jianfeng Wang, Liju… ☆20 · Updated 2 years ago
- PyTorch implementation of our paper accepted by IEEE TNNLS, 2021 -- Network Pruning using Adaptive Exemplar Filters ☆22 · Updated 3 years ago
- PyTorch implementation of our paper accepted by NeurIPS 2021 -- Revisiting Discriminator in GAN Compression: A Generator-discriminator Co… ☆35 · Updated 3 years ago
- Official Codes and Pretrained Models for RecursiveMix ☆22 · Updated last year
- WeightNet: Revisiting the Design Space of Weight Networks ☆19 · Updated 4 years ago
- Search Losses of our paper 'Loss Function Discovery for Object Detection via Convergence-Simulation Driven Search', accepted by ICLR 2021. ☆56 · Updated 2 years ago
- A PyTorch implementation of the ICCV 2021 workshop paper SimDis: Simple Distillation Baselines for Improving Small Self-supervised Models ☆14 · Updated 3 years ago
- [ICLR 2021 Spotlight Oral] "Undistillable: Making A Nasty Teacher That CANNOT teach students", Haoyu Ma, Tianlong Chen, Ting-Kuei Hu, Che… ☆81 · Updated 3 years ago
- PyTorch implementation of our paper accepted by IEEE TNNLS, 2022 -- Distilling a Powerful Student Model via Online Knowledge Distillation ☆28 · Updated 3 years ago
- ☆22 · Updated 5 years ago
- This project is the Torch implementation of our CVPR 2019 paper, Iterative Normalization: Beyond Standardization towards Effic… ☆24 · Updated 4 years ago
- Code of our NeurIPS 2020 paper "Auto Learning Attention", coming soon ☆21 · Updated 3 years ago
- PyTorch implementation of "Deep Transferring Quantization" (ECCV 2020) ☆18 · Updated 2 years ago
- ☆20 · Updated last year
- ☆9 · Updated 3 years ago
- ☆13 · Updated 3 years ago
- Repo for the paper "Extrapolating from a Single Image to a Thousand Classes using Distillation" ☆36 · Updated 7 months ago
- Beyond Masking: Demystifying Token-Based Pre-Training for Vision Transformers ☆26 · Updated 2 years ago
- Codebase for the paper "A Gradient Flow Framework for Analyzing Network Pruning" ☆21 · Updated 4 years ago
- [NeurIPS 2020] "Once-for-All Adversarial Training: In-Situ Tradeoff between Robustness and Accuracy for Free" by Haotao Wang*, Tianlong C… ☆43 · Updated 3 years ago
- Paper and Code for "Curriculum Learning by Optimizing Learning Dynamics" (AISTATS 2021) ☆19 · Updated 3 years ago
- ☆17 · Updated 2 years ago
- Implementation of the Heterogeneous Knowledge Distillation using Information Flow Modeling method ☆24 · Updated 4 years ago