yueb17 / PEMN · Links
☆20 · Updated 3 years ago
Alternatives and similar repositories for PEMN
Users interested in PEMN are comparing it to the libraries listed below.
- [NeurIPS'22] What Makes a "Good" Data Augmentation in Knowledge Distillation -- A Statistical Perspective ☆37 · Updated 3 years ago
- [ICLR'23] Trainability Preserving Neural Pruning (PyTorch) ☆34 · Updated 2 years ago
- [ICLR'21] Neural Pruning via Growing Regularization (PyTorch) ☆82 · Updated 4 years ago
- A generic code base for neural network pruning, especially for pruning at initialization. ☆31 · Updated 3 years ago
- Code for the paper "Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation" (CVPR 2023) ☆40 · Updated 2 years ago
- PyTorch implementation of the paper "Dataset Distillation via Factorization" (NeurIPS 2022) ☆67 · Updated 3 years ago
- [IJCAI 2021] Contrastive Model Inversion for Data-Free Knowledge Distillation ☆73 · Updated 3 years ago
- ☆28 · Updated 3 years ago
- This repository provides code for "On Interaction Between Augmentations and Corruptions in Natural Corruption Robustness". ☆46 · Updated 3 years ago
- Official code for Dataset Distillation using Neural Feature Regression (NeurIPS 2022) ☆48 · Updated 3 years ago
- Implementation of HAT (https://arxiv.org/pdf/2204.00993) ☆51 · Updated last year
- PyTorch implementation of our paper "Lottery Jackpots Exist in Pre-trained Models", accepted by TPAMI 2023 ☆35 · Updated 2 years ago
- [Preprint] Why is the State of Neural Network Pruning so Confusing? On the Fairness, Comparison Setup, and Trainability in Network Pruning ☆41 · Updated 4 months ago
- ☆89 · Updated 3 years ago
- ☆24 · Updated 2 years ago
- [ICDM 2023] Momentum is All You Need for Data-Driven Adaptive Optimization ☆26 · Updated last year
- [ICCV 2021] Amplitude-Phase Recombination: Rethinking Robustness of Convolutional Neural Networks in Frequency Domain ☆80 · Updated 3 years ago
- [ICML 2023] Revisiting Data-Free Knowledge Distillation with Poisoned Teachers ☆23 · Updated last year
- ☆17 · Updated 3 years ago
- ☆42 · Updated 2 years ago
- [AAAI 2022] Up to 100x Faster Data-free Knowledge Distillation ☆76 · Updated 3 years ago
- [NeurIPS 2021] Mosaicking to Distill: Knowledge Distillation from Out-of-Domain Data ☆45 · Updated 3 years ago
- ☆31 · Updated 5 years ago
- ☆32 · Updated 3 years ago
- [NeurIPS 2022] "Back Razor: Memory-Efficient Transfer Learning by Self-Sparsified Backpropagation", Ziyu Jiang*, Xuxi Chen*, Xueqin Huan… ☆20 · Updated 2 years ago
- ☆13 · Updated last year
- Code for ViTAS: Vision Transformer Architecture Search ☆51 · Updated 4 years ago
- [CVPR 2022] DiSparse: Disentangled Sparsification for Multitask Model Compression ☆14 · Updated 3 years ago
- An official PyTorch implementation of the paper "Partial Network Cloning" (CVPR 2023) ☆13 · Updated 2 years ago
- Reproduction of the paper "How Does Batch Normalization Help Optimization?" ☆21 · Updated 6 years ago