sseung0703 / SSKD_SVD
☆51 · Aug 8, 2019 · Updated 6 years ago
Alternatives and similar repositories for SSKD_SVD
Users interested in SSKD_SVD are comparing it to the libraries listed below.
- Knowledge distillation methods implemented with Tensorflow (currently 11 (+1) methods, with more to be added) ☆265 · Nov 21, 2019 · Updated 6 years ago
- Zero-Shot Knowledge Distillation in Deep Networks (ICML 2019) ☆49 · Jun 20, 2019 · Updated 6 years ago
- ☆17 · Mar 27, 2018 · Updated 7 years ago
- Google Colab tutorial with simple network training and Tensorboard ☆14 · Jul 17, 2019 · Updated 6 years ago
- Implementation of AutoSlim using Tensorflow2 ☆11 · Jun 5, 2020 · Updated 5 years ago
- Code for recent knowledge distillation algorithms and benchmark results via the TF2.0 low-level API ☆112 · Apr 6, 2022 · Updated 3 years ago
- Knowledge Transfer via Distillation of Activation Boundaries Formed by Hidden Neurons (AAAI 2019) ☆106 · Sep 9, 2019 · Updated 6 years ago
- Source code for the BMVC 2021 paper "SimReg: Regression as a Simple Yet Effective Tool for Self-supervised Knowledge Distillation" ☆16 · Jan 20, 2022 · Updated 4 years ago
- ☆16 · Apr 20, 2020 · Updated 5 years ago
- ☆137 · Oct 22, 2018 · Updated 7 years ago
- Source code for "Knowledge Distillation via Instance Relationship Graph" ☆30 · Jun 13, 2019 · Updated 6 years ago
- [ECCV 2020] Knowledge Distillation Meets Self-Supervision ☆238 · Dec 15, 2022 · Updated 3 years ago
- Official PyTorch implementation of Relational Knowledge Distillation (CVPR 2019) ☆414 · May 17, 2021 · Updated 4 years ago
- Code for "DATA: Differentiable ArchiTecture Approximation" ☆11 · Jul 22, 2021 · Updated 4 years ago
- Compressing Representations for Self-Supervised Learning ☆80 · Feb 18, 2021 · Updated 4 years ago
- TensorFlow implementation of several zero-shot image style transfer methods ☆15 · Sep 30, 2017 · Updated 8 years ago
- Grassmannian Optimization for Tensor Completion and Tracking in the t-SVD Algebra ☆12 · Oct 7, 2025 · Updated 4 months ago
- Code for LIT (ICML 2019) ☆20 · Jun 11, 2019 · Updated 6 years ago
- SKD: Self-supervised Knowledge Distillation for Few-shot Learning ☆102 · Oct 3, 2023 · Updated 2 years ago
- Ensemble Knowledge Guided Sub-network Search and Fine-tuning for Filter Pruning ☆19 · Sep 20, 2022 · Updated 3 years ago
- BING++: A Fast High-Quality Object Proposal Generator at 100 fps ☆16 · May 18, 2016 · Updated 9 years ago
- ☆20 · Oct 22, 2021 · Updated 4 years ago
- ☆31 · Sep 20, 2019 · Updated 6 years ago
- ☆17 · Nov 4, 2022 · Updated 3 years ago
- Triplet Loss for Knowledge Distillation ☆18 · Sep 4, 2022 · Updated 3 years ago
- ☆23 · Oct 20, 2020 · Updated 5 years ago
- Getting Started with NIMBUS-CORE ☆10 · Dec 16, 2023 · Updated 2 years ago
- Spatio-temporal tasks ☆15 · Jul 15, 2024 · Updated last year
- ☆23 · Jun 8, 2019 · Updated 6 years ago
- Implements quantized distillation; code for the paper "Model compression via distillation and quantization" ☆336 · Jul 25, 2024 · Updated last year
- Code for the NeurIPS 2019 paper "Gate Decorator: Global Filter Pruning Method for Accelerating Deep Convolutional Neural Networks" ☆196 · Feb 21, 2020 · Updated 5 years ago
- ☆19 · Jan 20, 2020 · Updated 6 years ago
- Small-scale experiments with group normalization ☆58 · Apr 4, 2018 · Updated 7 years ago
- Deep Metric Transfer for Label Propagation with Limited Annotated Data ☆50 · Jun 3, 2023 · Updated 2 years ago
- Improving Convolutional Networks via Attention Transfer (ICLR 2017) ☆1,463 · Jul 11, 2018 · Updated 7 years ago
- Code release for "Catastrophic Forgetting Meets Negative Transfer: Batch Spectral Shrinkage for Safe Transfer Learning" (NeurIPS 2019) ☆24 · Nov 29, 2021 · Updated 4 years ago
- ICON Lab @ Bilkent University ☆20 · Feb 27, 2022 · Updated 3 years ago
- ☆23 · Sep 12, 2019 · Updated 6 years ago
- Official code for the paper "Structured Knowledge Distillation for Semantic Segmentation" (CVPR 2019 oral) and extension to other ta… ☆740 · Apr 20, 2020 · Updated 5 years ago