Zero-Shot Knowledge Distillation in Deep Networks
☆67 · Apr 16, 2022 · Updated 3 years ago
Alternatives and similar repositories for ZSKD
Users interested in ZSKD are comparing it to the repositories listed below.
- Zero-Shot Knowledge Distillation in Deep Networks (ICML 2019) ☆49 · Jun 20, 2019 · Updated 6 years ago
- Code for LIT, ICML 2019 ☆22 · Jun 11, 2019 · Updated 6 years ago
- Role-Wise Data Augmentation for Knowledge Distillation ☆19 · Nov 22, 2022 · Updated 3 years ago
- Accompanying code for the paper "Zero-shot Knowledge Transfer via Adversarial Belief Matching" ☆143 · Apr 29, 2020 · Updated 5 years ago
- ZSKD with PyTorch ☆31 · Jun 26, 2023 · Updated 2 years ago
- Generalize then Adapt: Source-free Domain Adaptation for Semantic Segmentation (ICCV 2021) ☆10 · Oct 12, 2021 · Updated 4 years ago
- Code for the paper "Few Shot Network Compression via Cross Distillation", AAAI 2020 ☆30 · Jan 31, 2020 · Updated 6 years ago
- Code release for "From Image Collections to Point Clouds with Self-supervised Shape and Pose Networks" (CVPR 2020) ☆28 · May 6, 2020 · Updated 5 years ago
- Code and pretrained models for the paper "Data-Free Adversarial Distillation" ☆99 · Nov 28, 2022 · Updated 3 years ago
- Experiments on GANs hallucinating data samples for an incremental learner ☆18 · Jul 17, 2017 · Updated 8 years ago
- Knowledge Extraction with No Observable Data (NeurIPS 2019) ☆46 · Jan 9, 2020 · Updated 6 years ago
- Class Balancing GAN with a Classifier In The Loop (UAI 2021) ☆12 · Feb 11, 2022 · Updated 4 years ago
- Official PyTorch implementation of "Dreaming to Distill: Data-free Knowledge Transfer via DeepInversion" (CVPR 2020) ☆516 · Jan 25, 2023 · Updated 3 years ago
- A PyTorch implementation of scalable neural networks ☆23 · Jun 9, 2020 · Updated 5 years ago
- A small demo for training a CNN with PyTorch ☆11 · Dec 15, 2018 · Updated 7 years ago
- Code release for the CVPR 2020 (oral) paper "Towards Inheritable Models for Open-set Domain Adaptation" ☆11 · Jul 2, 2020 · Updated 5 years ago
- Compressing Representations for Self-Supervised Learning ☆80 · Feb 18, 2021 · Updated 5 years ago
- Towards Optimal Structured CNN Pruning via Generative Adversarial Learning ☆18 · Mar 23, 2019 · Updated 7 years ago
- ☆25 · Jul 11, 2019 · Updated 6 years ago
- Coarse-to-Fine Curriculum Learning ☆20 · Apr 28, 2020 · Updated 5 years ago
- Command-line tool to block distracting websites and activate focus mode ☆18 · Jun 2, 2022 · Updated 3 years ago
- AutoGrow: Automatic Layer Growing in Deep Convolutional Networks (KDD 2020) ☆39 · Jun 10, 2019 · Updated 6 years ago
- A large-scale study of knowledge distillation ☆220 · Apr 19, 2020 · Updated 5 years ago
- Code for "EigenDamage: Structured Pruning in the Kronecker-Factored Eigenbasis" (https://arxiv.org/abs/1905.05934) ☆113 · Mar 3, 2020 · Updated 6 years ago
- Implementation of experiments in the paper "Learning from Rules Generalizing Labeled Exemplars", ICLR 2020 (https://openreview.net…) ☆50 · Feb 28, 2023 · Updated 3 years ago
- ISD: Self-Supervised Learning by Iterative Similarity Distillation ☆36 · Oct 12, 2021 · Updated 4 years ago
- An unofficial personal implementation of UM-Adapt, specifically tackling joint estimation of panoptic segmentation and depth prediction … ☆16 · Oct 4, 2023 · Updated 2 years ago
- Knowledge distillation methods implemented with TensorFlow (currently 11 (+1) methods; more will be added) ☆265 · Nov 21, 2019 · Updated 6 years ago
- Using Teacher Assistants to Improve Knowledge Distillation (https://arxiv.org/pdf/1902.03393.pdf) ☆264 · Oct 3, 2019 · Updated 6 years ago
- Active attention in classification networks, optimised at model-training time ☆11 · Nov 9, 2018 · Updated 7 years ago
- Code for the accepted paper "Cooperative Pruning in Cross-Domain Deep Neural Network Compression", IJCAI 2019 ☆12 · Aug 15, 2019 · Updated 6 years ago
- A general framework for super-resolution tasks ☆12 · Aug 15, 2018 · Updated 7 years ago
- ☆37 · Jun 21, 2022 · Updated 3 years ago
- [ICLR 2020] Contrastive Representation Distillation (CRD), and benchmark of recent knowledge distillation methods ☆2,428 · Oct 16, 2023 · Updated 2 years ago
- Awesome Knowledge-Distillation: a categorized collection of knowledge distillation papers (2014–2021) ☆2,657 · May 30, 2023 · Updated 2 years ago
- Data-free knowledge distillation using Gaussian noise (NeurIPS paper) ☆15 · Mar 24, 2023 · Updated 3 years ago
- ☆45 · Jan 17, 2020 · Updated 6 years ago
- Learning What and Where to Transfer (ICML 2019) ☆249 · Oct 20, 2020 · Updated 5 years ago
- Towards Achieving Adversarial Robustness by Enforcing Feature Consistency Across Bit Planes ☆23 · Jun 14, 2020 · Updated 5 years ago
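Most of the repositories above build on (or remove the data requirement from) the classic distillation objective of Hinton et al.: match the student's temperature-softened output distribution to the teacher's. For orientation, here is a minimal pure-Python sketch of that objective; the function names and the temperature value are illustrative and not taken from any of the listed repos.

```python
import math

def softened_probs(logits, T):
    # Temperature-scaled softmax: higher T flattens the distribution,
    # exposing the teacher's "dark knowledge" about non-target classes.
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, T=4.0):
    # KL(teacher || student) on temperature-softened distributions,
    # scaled by T^2 so gradient magnitudes stay comparable across T.
    p = softened_probs(teacher_logits, T)
    q = softened_probs(student_logits, T)
    return T * T * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Identical logits give zero loss; a mismatched student is penalised.
print(distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]))      # → 0.0
print(distillation_loss([0.1, 1.0, 2.0], [2.0, 1.0, 0.1]) > 0)  # → True
```

The zero-shot and data-free entries in the list differ mainly in where `teacher_logits` come from, since they synthesize or invert inputs instead of using real training data.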