roymiles / ITRD
[BMVC 2022] Information Theoretic Representation Distillation
☆18 · updated last year
Alternatives and similar repositories for ITRD:
Users interested in ITRD are comparing it to the repositories listed below:
- [NeurIPS'24] Multilinear Mixture of Experts: Scalable Expert Specialization through Factorization (☆29, updated 5 months ago)
- [CVPR 2022] "The Principle of Diversity: Training Stronger Vision Transformers Calls for Reducing All Levels of Redundancy" by Tianlong C… (☆25, updated 2 years ago)
- The official project website of "NORM: Knowledge Distillation via N-to-One Representation Matching" (The paper of NORM is published in IC… (☆19, updated last year)
- Source code for the BMVC-2021 paper "SimReg: Regression as a Simple Yet Effective Tool for Self-supervised Knowledge Distillation". (☆16, updated 3 years ago)
- (ICLR 2025) BinaryDM: Accurate Weight Binarization for Efficient Diffusion Models (☆17, updated 5 months ago)
- This repository contains the code for our CVPR 2022 paper on "Non-isotropy Regularization for Proxy-based Deep Metric Learning". (☆14, updated last year)
- [CVPR 2024] VkD: Improving Knowledge Distillation using Orthogonal Projections (☆50, updated 4 months ago)
- [Preprint] Why is the State of Neural Network Pruning so Confusing? On the Fairness, Comparison Setup, and Trainability in Network Prunin… (☆40, updated 2 years ago)
- Repo for the paper "Extrapolating from a Single Image to a Thousand Classes using Distillation" (☆36, updated 7 months ago)
- (CVPR 2022) Automated Progressive Learning for Efficient Training of Vision Transformers (☆25, updated this week)
- Robustness via Cross-Domain Ensembles, ICCV 2021 [Oral] (☆38, updated 3 years ago)
- ISD: Self-Supervised Learning by Iterative Similarity Distillation (☆36, updated 3 years ago)
- [WACV2023] This is the official PyTorch implementation of our paper "[Rethinking Rotation in Self-Supervised Contrastive Learning: Adapt… (☆12, updated 2 years ago)
- [NeurIPS'22] What Makes a "Good" Data Augmentation in Knowledge Distillation -- A Statistical Perspective (☆36, updated 2 years ago)
- Repository containing code for blockwise SSL training (☆28, updated 4 months ago)
- Code implementation for the paper "On the Efficacy of Small Self-Supervised Contrastive Models without Distillation Signals". (☆16, updated 3 years ago)
- [ICLR 2023] "Layer Grafted Pre-training: Bridging Contrastive Learning And Masked Image Modeling For Better Representations", Ziyu Jian… (☆24, updated 2 years ago)
- A PyTorch implementation of the ICCV 2021 workshop paper "SimDis: Simple Distillation Baselines for Improving Small Self-supervised Models" (☆14, updated 3 years ago)
- Bag of Instances Aggregation Boosts Self-supervised Distillation (ICLR 2022) (☆33, updated 2 years ago)
- Official Codes and Pretrained Models for RecursiveMix (☆22, updated last year)
- Official implementation for "Knowledge Distillation with Refined Logits". (☆13, updated 6 months ago)
- An official PyTorch implementation of the paper "Partial Network Cloning", CVPR 2023 (☆13, updated last year)
- Data-free knowledge distillation using Gaussian noise (NeurIPS paper) (☆15, updated last year)
- 🔥MixPro: Data Augmentation with MaskMix and Progressive Attention Labeling for Vision Transformer [Official, ICLR 2023] (☆21, updated last year)
- i-mae PyTorch repo (☆20, updated 10 months ago)
- Code for "CropMix: Sampling a Rich Input Distribution via Multi-Scale Cropping" (☆17, updated 2 years ago)
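Most entries above are knowledge-distillation repositories that match a student network's intermediate features to a teacher's. As background, here is a minimal sketch of generic feature-level distillation: a learned projection maps student features to the teacher's dimension, both are L2-normalized, and their mean squared distance is penalized. This is a common baseline pattern, not ITRD's information-theoretic objective; all names below (`feature_distill_loss`, `W`) are illustrative assumptions.

```python
import numpy as np

def l2_normalize(x, eps=1e-8):
    """Row-wise L2 normalization of a (batch, dim) feature matrix."""
    return x / (np.linalg.norm(x, axis=1, keepdims=True) + eps)

def feature_distill_loss(f_student, f_teacher, W):
    """Generic feature-matching distillation loss (illustrative, not ITRD).

    W projects student features into the teacher's feature space;
    in training it would be learned jointly with the student.
    """
    z = l2_normalize(f_student @ W)   # projected, normalized student features
    t = l2_normalize(f_teacher)       # normalized teacher features
    return float(np.mean((z - t) ** 2))

rng = np.random.default_rng(0)
f_s = rng.standard_normal((8, 64))          # hypothetical student features
f_t = rng.standard_normal((8, 128))         # hypothetical teacher features
W = rng.standard_normal((64, 128)) * 0.01   # projection (random here, learned in practice)
loss = feature_distill_loss(f_s, f_t, W)
```

In an actual training loop this term would be added to the task loss and minimized over both the student's parameters and `W`; the repositories above each replace the simple L2 matching with their own objective (correlation-based, N-to-one matching, iterative similarity, etc.).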