i13abe / Triplet-Loss-for-Knowledge-Distillation
Triplet Loss for Knowledge Distillation
☆18 · Updated 2 years ago
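For orientation, here is a minimal sketch of how a triplet loss can drive knowledge distillation (assuming PyTorch; the `student`/`teacher` modules and the negative-sampling scheme are hypothetical, and this is one common formulation rather than necessarily this repository's exact method):

```python
import torch
import torch.nn as nn

# Triplet loss over embeddings: pull the student's embedding of an image
# (anchor) toward the frozen teacher's embedding of the same image
# (positive), and push it away from the teacher's embedding of a
# mismatched image (negative). The margin is a hyperparameter.
triplet = nn.TripletMarginLoss(margin=1.0)

def distillation_loss(student, teacher, x, x_neg):
    # x: a batch of images; x_neg: a batch of mismatched (e.g. other-class)
    # images. How negatives are mined is left open in this sketch.
    anchor = student(x)
    with torch.no_grad():  # the teacher is not updated during distillation
        positive = teacher(x)
        negative = teacher(x_neg)
    return triplet(anchor, positive, negative)
```

This sketch assumes the student and teacher embed into the same dimensionality; when they differ, a small projection head on the student is a common bridge.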
Alternatives and similar repositories for Triplet-Loss-for-Knowledge-Distillation
Users interested in Triplet-Loss-for-Knowledge-Distillation are comparing it to the libraries listed below.
- ☆15 · Updated 3 years ago
- Official code of the paper "HoMM: Higher-order Moment Matching for Unsupervised Domain Adaptation" (AAAI 2020) ☆44 · Updated 5 years ago
- An efficient implementation for ImageNet classification ☆17 · Updated 4 years ago
- [CVPR2021] "Visualizing Adapted Knowledge in Domain Transfer". Visualization for domain adaptation. #explainable-ai ☆98 · Updated 3 years ago
- The official project for the CVPR19 paper: Domain-Symmetric Networks for Adversarial Domain Adaptation ☆85 · Updated 4 years ago
- Collect and Select: Semantic Alignment Metric Learning for Few-shot Learning ☆20 · Updated 5 years ago
- Code release for "Transferable Normalization: Towards Improving Transferability of Deep Neural Networks" (NeurIPS 2019) ☆79 · Updated 4 years ago
- Learning Metrics from Teachers: Compact Networks for Image Embedding (CVPR19) ☆76 · Updated 6 years ago
- (CVPR Oral 2021) PyTorch implementation of the Knowledge Evolution approach and Split-Nets ☆83 · Updated 3 years ago
- Self-distillation with Batch Knowledge Ensembling Improves ImageNet Classification ☆82 · Updated 3 years ago
- Code for the paper "Unsupervised Domain Adaptation using Feature-Whitening and Consensus Loss" (CVPR 2019) ☆64 · Updated 4 years ago
- A PyTorch re-implementation of IterNorm ☆41 · Updated 6 years ago
- (CVPR 2020) Revisiting Pose-Normalization for Fine-Grained Few-Shot Recognition ☆41 · Updated 4 years ago
- Repo for the paper "Episodic Training for Domain Generalization" (https://arxiv.org/abs/1902.00113) ☆57 · Updated last year
- ☆51 · Updated 5 years ago
- PyTorch implementation of Weighted Batch-Normalization layers ☆37 · Updated 4 years ago
- The PyTorch code of "Distribution Consistency based Covariance Metric Networks for Few-shot Learning" (AAAI 2019) ☆57 · Updated 4 years ago
- Lifelong Learning via Progressive Distillation and Retrospection ☆14 · Updated 6 years ago
- Code for "Training Interpretable Convolutional Neural Networks by Differentiating Class-specific Filters" ☆27 · Updated last year
- Knowledge Transfer via Dense Cross-layer Mutual-distillation (ECCV 2020) ☆29 · Updated 4 years ago
- TPAMI 2020, "Unsupervised Multi-Class Domain Adaptation: Theory, Algorithms, and Practice" ☆75 · Updated 4 years ago
- [ICCV2019] Attract or Distract: Exploit the Margin of Open Set ☆35 · Updated 4 years ago
- ☆61 · Updated 5 years ago
- Knowledge Distillation with Adversarial Samples Supporting Decision Boundary (AAAI 2019) ☆71 · Updated 5 years ago
- Implementation of the "Heterogeneous Knowledge Distillation using Information Flow Modeling" method ☆24 · Updated 5 years ago
- Trained model weights, training and evaluation code from the paper "A simple way to make neural networks robust against diverse image corruptions" ☆61 · Updated 2 years ago
- Code for 'Open Set Domain Adaptation by Backpropagation' ☆75 · Updated 6 years ago
- [TIP 2022] Towards Better Accuracy-efficiency Trade-offs: Divide and Co-training. Plus, an image classification toolbox that includes ResNet, … ☆104 · Updated 2 years ago
- PyTorch code for the paper "FeatMatch: Feature-Based Augmentation for Semi-Supervised Learning" ☆50 · Updated 4 years ago
- [CVPR2019] NDDR-CNN: Layerwise Feature Fusing in Multi-Task CNNs by Neural Discriminative Dimensionality Reduction ☆54 · Updated 5 years ago