Zero-Shot Knowledge Distillation in Deep Networks (ICML 2019)
☆49 · Jun 20, 2019 · Updated 6 years ago
Alternatives and similar repositories for Zero-shot_Knowledge_Distillation
Users interested in Zero-shot_Knowledge_Distillation are comparing it to the libraries listed below.
- Google Colab tutorial with simple network training and Tensorboard.☆14 · Jul 17, 2019 · Updated 6 years ago
- ☆51 · Aug 8, 2019 · Updated 6 years ago
- Reproducing VID in CVPR2019 (work in progress)☆20 · Nov 25, 2019 · Updated 6 years ago
- Implementation of Autoslim using Tensorflow2☆11 · Jun 5, 2020 · Updated 5 years ago
- Zero-Shot Knowledge Distillation in Deep Networks☆67 · Apr 16, 2022 · Updated 3 years ago
- Knowledge distillation methods implemented with Tensorflow (currently 11 (+1) methods; more will be added)☆265 · Nov 21, 2019 · Updated 6 years ago
- ☆17 · Mar 27, 2018 · Updated 8 years ago
- The codes for recent knowledge distillation algorithms and benchmark results via TF2.0 low-level API☆112 · Apr 6, 2022 · Updated 3 years ago
- Knowledge Transfer via Distillation of Activation Boundaries Formed by Hidden Neurons (AAAI 2019)☆106 · Sep 9, 2019 · Updated 6 years ago
- ☆12 · Sep 30, 2022 · Updated 3 years ago
- Ensemble Knowledge Guided Sub-network Search and Fine-tuning for Filter Pruning☆19 · Sep 20, 2022 · Updated 3 years ago
- ZSKD with PyTorch☆31 · Jun 26, 2023 · Updated 2 years ago
- Code Release for the CVPR 2020 (oral) paper, "Towards Inheritable Models for Open-set Domain Adaptation".☆11 · Jul 2, 2020 · Updated 5 years ago
- ActiveHARNet: Towards On-Device Deep Bayesian Active Learning for Human Activity Recognition☆16 · Nov 7, 2020 · Updated 5 years ago
- Code and pretrained models for paper: Data-Free Adversarial Distillation☆99 · Nov 28, 2022 · Updated 3 years ago
- ☆10 · Dec 15, 2018 · Updated 7 years ago
- Generalize then Adapt: Source-free Domain Adaptation for Semantic Segmentation (ICCV 2021)☆10 · Oct 12, 2021 · Updated 4 years ago
- This is the rp12 hub.☆24 · Feb 3, 2020 · Updated 6 years ago
- Data-enriching GAN for retrieving Representative Samples from a Trained Classifier☆14 · Sep 2, 2020 · Updated 5 years ago
- EmotiW 2018☆20 · Dec 25, 2018 · Updated 7 years ago
- [EMNLP 2021] MuVER: Improving First-Stage Entity Retrieval with Multi-View Entity Representations☆31 · May 23, 2022 · Updated 3 years ago
- A small demo for training a CNN with PyTorch.☆11 · Dec 15, 2018 · Updated 7 years ago
- ☆61 · Apr 24, 2020 · Updated 5 years ago
- Codes for paper "Few Shot Network Compression via Cross Distillation", AAAI 2020.☆30 · Jan 31, 2020 · Updated 6 years ago
- Towards Optimal Structured CNN Pruning via Generative Adversarial Learning☆18 · Mar 23, 2019 · Updated 7 years ago
- Codes for DATA: Differentiable ArchiTecture Approximation.☆11 · Jul 22, 2021 · Updated 4 years ago
- Code for ECCV 2022 paper “Learning with Recoverable Forgetting”☆21 · Jul 27, 2022 · Updated 3 years ago
- Compression of Deep Neural Networks LeNet-300-100 and LeNet-5 trained on MNIST and CIFAR-10 using Quantization, Knowledge Distillation & …☆20 · Aug 22, 2019 · Updated 6 years ago
- Keras implementation of knowledge distillation (Hinton et al., 2015)☆19 · Oct 13, 2018 · Updated 7 years ago
- Code for LIT, ICML 2019☆22 · Jun 11, 2019 · Updated 6 years ago
- Successfully training approximations to full-rank matrices for efficiency in deep learning.☆17 · Jan 5, 2021 · Updated 5 years ago
- Accompanying code for the paper "Zero-shot Knowledge Transfer via Adversarial Belief Matching"☆143 · Apr 29, 2020 · Updated 5 years ago
- ISD: Self-Supervised Learning by Iterative Similarity Distillation☆36 · Oct 12, 2021 · Updated 4 years ago
- An unofficial personal implementation of UM-Adapt, specifically to tackle joint estimation of panoptic segmentation and depth prediction …☆16 · Oct 4, 2023 · Updated 2 years ago
- Role-Wise Data Augmentation for Knowledge Distillation☆19 · Nov 22, 2022 · Updated 3 years ago
- Codes for accepted paper "Cooperative Pruning in Cross-Domain Deep Neural Network Compression" in IJCAI 2019.☆12 · Aug 15, 2019 · Updated 6 years ago
- Compressing Representations for Self-Supervised Learning☆80 · Feb 18, 2021 · Updated 5 years ago
- Official PyTorch Implementation of Relational Knowledge Distillation, CVPR 2019☆414 · May 17, 2021 · Updated 4 years ago
- A large scale study of Knowledge Distillation.☆220 · Apr 19, 2020 · Updated 5 years ago
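For context, most of the repositories above build on the classic distillation objective of Hinton et al. (2015): a weighted sum of cross-entropy against the hard labels and a KL term between temperature-softened teacher and student outputs. A minimal sketch follows; the function names, `temperature=4.0`, and `alpha=0.9` are illustrative choices, not values taken from any listed repository.

```python
# Sketch of the Hinton et al. (2015) knowledge-distillation loss.
# All names and hyperparameter values here are illustrative.
import numpy as np

def softmax(logits, temperature=1.0):
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()                      # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, label,
                      temperature=4.0, alpha=0.9):
    """alpha weights the soft (teacher) term; (1 - alpha) the hard-label term."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    # KL(teacher || student) on softened distributions, scaled by T^2
    # as the original paper suggests to keep gradient magnitudes comparable
    soft = (temperature ** 2) * np.sum(
        p_teacher * (np.log(p_teacher) - np.log(p_student)))
    # standard cross-entropy with the ground-truth label (temperature 1)
    hard = -np.log(softmax(student_logits)[label])
    return alpha * soft + (1 - alpha) * hard

loss = distillation_loss([2.0, 0.5, 0.1], [1.8, 0.7, 0.2], label=0)
```

Zero-shot variants such as ZSKD keep this objective but replace the real training data with samples synthesized from the teacher itself.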