(CVPR 2021 Oral) PyTorch implementation of the Knowledge Evolution approach and Split-Nets
☆83 · Updated Oct 9, 2021
Alternatives and similar repositories for knowledge_evolution
Users interested in knowledge_evolution are comparing it to the repositories listed below.
- ☆27 · Updated Feb 6, 2021
- Official implementation for "Refine Myself by Teaching Myself: Feature Refinement via Self-Knowledge Distillation" (CVPR 2021) ☆103 · Updated Apr 30, 2024
- Source code for "Dual-Level Knowledge Distillation via Knowledge Alignment and Correlation" (TNNLS), https://ieeexplore.ieee.org/abstract/… ☆12 · Updated Dec 21, 2022
- Implementation of the paper "Effective Training of Convolutional Neural Networks with Low-bitwidth Weights and Activations" ☆20 · Updated Aug 30, 2021
- Minimal implementation of adaptive gradient clipping (https://arxiv.org/abs/2102.06171) in TensorFlow 2 ☆86 · Updated Jun 16, 2021
- [NeurIPS 2020] Balanced Meta-Softmax for Long-Tailed Visual Recognition ☆145 · Updated Nov 22, 2021
- Official implementation of the CVPR 2020 paper "Non-Local Neural Networks With Grouped Bilinear Attentional Transforms" ☆12 · Updated Jan 30, 2021
- ☆27 · Updated Dec 13, 2022
- Self-distillation with Batch Knowledge Ensembling Improves ImageNet Classification ☆82 · Updated Jun 9, 2021
- Code for "Minimizing FLOPs to Learn Efficient Sparse Representations" (ICLR 2020) ☆20 · Updated Feb 14, 2020
- ☆19 · Updated Jan 27, 2021
- PyTorch implementation of "Distilling a Powerful Student Model via Online Knowledge Distillation" (IEEE TNNLS, 2022) ☆31 · Updated Nov 11, 2021
- Code for "PR Product: A Substitute for Inner Product in Neural Networks", containing PR-FC, PR-CNN, and PR-LSTM ☆19 · Updated Jun 6, 2020
- (CVPR 2022) Official PyTorch implementation of KDEP: "Knowledge Distillation as Efficient Pre-training: Faster Convergence, Higher Data-eff…" ☆60 · Updated Jul 21, 2022
- [AAAI 2021, TKDE 2023] Official implementation for "Cross-Layer Distillation with Semantic Calibration" ☆78 · Updated Jul 29, 2024
- Knowledge Transfer via Dense Cross-layer Mutual-distillation (ECCV 2020) ☆30 · Updated Aug 19, 2020
- [ICCV 2023] Subclass-balancing contrastive learning for long-tailed recognition ☆18 · Updated Oct 30, 2023
- [ECCV 2022] PyTorch implementation of "Equivariance and Invariance Inductive Bias for Learning from Insufficient Data" ☆19 · Updated Oct 12, 2022
- [CVPR 2021] Dynamic Class Queue for Large Scale Face Recognition in the Wild ☆61 · Updated Sep 3, 2021
- [AAAI 2020] Official implementation for "Online Knowledge Distillation with Diverse Peers" ☆76 · Updated Jul 6, 2023
- [ECCV 2020] Knowledge Distillation Meets Self-Supervision ☆237 · Updated Dec 15, 2022
- Code for Samuel and Chechik, "Distributional Robustness Loss for Long-tail Learning" ☆32 · Updated Jan 2, 2022
- Awesome Knowledge-Distillation: a categorized collection of knowledge distillation papers (2014-2021) ☆2,655 · Updated May 30, 2023
- ☆20 · Updated Jul 19, 2022
- Official code for the paper "Evolved Part Masking for Self-Supervised Learning" ☆16 · Updated Jun 14, 2023
- Distilling Knowledge via Knowledge Review (CVPR 2021) ☆279 · Updated Dec 16, 2022
- Official implementation of the detection self-distillation framework LGD ☆53 · Updated Apr 19, 2022
- Implementation of "Distribution Alignment: A Unified Framework for Long-tail Visual Recognition" (CVPR 2021) ☆118 · Updated Nov 23, 2021
- LowFER: Low-rank Bilinear Pooling for Link Prediction (ICML 2020) ☆13 · Updated Sep 24, 2022
- ISD: Self-Supervised Learning by Iterative Similarity Distillation ☆36 · Updated Oct 12, 2021
- [ICML 2020] SIGUA: Forgetting May Make Learning with Noisy Labels More Robust ☆17 · Updated Dec 14, 2020
- [CVPR 2021] "The Lottery Tickets Hypothesis for Supervised and Self-supervised Pre-training in Computer Vision Models", Tianlong Chen, Jon… ☆68 · Updated Dec 17, 2022
- Graph Knowledge Distillation ☆13 · Updated Mar 6, 2020
- Stochastic Optimization for Global Contrastive Learning without Large Mini-batches ☆20 · Updated Mar 31, 2023
- Improving Contrastive Learning by Visualizing Feature Transformation (ICCV 2021 Oral) ☆90 · Updated Oct 11, 2021
- Official PyTorch implementation for the CVPR 2022 paper "Bootstrapping ViTs: Towards Liberating Vision Transformers from Pre-training" ☆18 · Updated Apr 11, 2022
- Official PyTorch implementation of the paper "Recycling Discriminator: Towards Opinion-Unaware Image Quality Assessment Using Wasserstein…" ☆25 · Updated Oct 28, 2021
- Official MegEngine implementation of the ECCV 2022 paper "Efficient One Pass Self-distillation with Zipf's Label Smoothin…" ☆28 · Updated Oct 19, 2022
- ☆20 · Updated Sep 28, 2020