peterljq / Tutorial-of-Data-Distillation-and-Condensation
A comprehensive overview of Data Distillation and Condensation (DDC). DDC is a data-centric task in which a small but training-effective (i.e., representative) set of examples is synthesized from a large dataset.
☆13 · Updated 2 years ago
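As a minimal, dependency-free sketch of the idea (illustrative only, not code from this repository): gradient-matching condensation learns a tiny synthetic set whose training gradients agree with those of the full dataset, here reduced to a 1-D linear model and a single synthetic point. All names and constants below are made up for the example.

```python
# Toy gradient-matching dataset condensation for the model y_hat = w * x.
# The real data follows y = 3x; we learn ONE synthetic point whose MSE-loss
# gradient matches the real data's gradient at several probe weights.

REAL = [(-1.0, -3.0), (-0.5, -1.5), (0.5, 1.5), (1.0, 3.0)]  # y = 3x

def grad_of_loss(w, data):
    """Gradient w.r.t. w of the MSE loss of y_hat = w * x on `data`."""
    return sum(2.0 * x * (w * x - y) for x, y in data) / len(data)

def condense(real, probe_ws=(-1.0, 0.0, 1.0), steps=10000, lr=0.003):
    """Optimize a synthetic point (x, y) so its loss gradient matches the
    real data's at each probe weight (gradient matching)."""
    x, y = 0.5, 0.5  # synthetic point, updated by gradient descent below
    for _ in range(steps):
        gx = gy = 0.0
        for w in probe_ws:
            diff = grad_of_loss(w, [(x, y)]) - grad_of_loss(w, real)
            # Analytic partials of the synthetic gradient 2x(wx - y):
            #   d/dx = 4wx - 2y,  d/dy = -2x
            gx += 2.0 * diff * (4.0 * w * x - 2.0 * y)
            gy += 2.0 * diff * (-2.0 * x)
        x -= lr * gx
        y -= lr * gy
    return x, y
```

Because the loss gradient is linear in w, matching it at a few probe weights forces the synthetic point to reproduce the real data's sufficient statistics, so a model trained on the single condensed point recovers the same optimum (w* = y/x ≈ 3) as training on all four real points.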
Alternatives and similar repositories for Tutorial-of-Data-Distillation-and-Condensation
Users that are interested in Tutorial-of-Data-Distillation-and-Condensation are comparing it to the libraries listed below
- Experiments from "The Generalization-Stability Tradeoff in Neural Network Pruning": https://arxiv.org/abs/1906.03728 ☆14 · Updated 4 years ago
- Code for Double-Blind Collaborative Learning (DBCL) ☆14 · Updated 4 years ago
- Code for the ICML 2021 paper "iDARTS: Differentiable Architecture Search with Stochastic Implicit Gradients" ☆10 · Updated 4 years ago
- ☆11 · Updated 2 years ago
- Minimum viable code for the Decodable Information Bottleneck paper. PyTorch implementation. ☆11 · Updated 4 years ago
- ☆26 · Updated 3 years ago
- Automated neural architecture search algorithms implemented in PyTorch and the AutoGluon toolkit. ☆12 · Updated 5 years ago
- [ICLR 2022] "Sparsity Winning Twice: Better Robust Generalization from More Efficient Training" by Tianlong Chen*, Zhenyu Zhang*, Pengjun… ☆39 · Updated 3 years ago
- Code associated with the paper "Fine-tuning Language Models over Slow Networks using Activation Compression with Guarantees". ☆28 · Updated 2 years ago
- [ICML 2021] "Efficient Lottery Ticket Finding: Less Data is More" by Zhenyu Zhang*, Xuxi Chen*, Tianlong Chen*, Zhangyang Wang ☆25 · Updated 3 years ago
- Dataset Interfaces: Diagnosing Model Failures Using Controllable Counterfactual Generation ☆45 · Updated 2 years ago
- Code for the paper "Pretrained Models for Multilingual Federated Learning" at NAACL 2022 ☆11 · Updated 3 years ago
- A PyTorch implementation of the paper "ViP: A Differentially Private Foundation Model for Computer Vision" ☆36 · Updated 2 years ago
- ☆16 · Updated 3 years ago
- Towards Achieving Adversarial Robustness by Enforcing Feature Consistency Across Bit Planes ☆23 · Updated 5 years ago
- Self-Distillation with weighted ground-truth targets; ResNet and Kernel Ridge Regression ☆19 · Updated 3 years ago
- Paper List for In-context Learning 🌷 ☆20 · Updated 2 years ago
- kyleliang919 / Uncovering-the-Connections-Between-Adversarial-Transferability-and-Knowledge-Transferability: code for the ICML 2021 paper exploring the relationship between adversarial transferability and knowledge transferability. ☆17 · Updated 2 years ago
- [CVPR 2021] "The Lottery Tickets Hypothesis for Supervised and Self-supervised Pre-training in Computer Vision Models" by Tianlong Chen, Jon… ☆68 · Updated 2 years ago
- A PyTorch implementation of the ICCV 2021 workshop paper "SimDis: Simple Distillation Baselines for Improving Small Self-supervised Models" ☆14 · Updated 4 years ago
- On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them [NeurIPS 2020] ☆36 · Updated 4 years ago
- Host for the CIFAR-10.2 data set ☆13 · Updated 4 years ago
- Mitigating Spurious Correlations in Multi-modal Models during Fine-tuning (ICML 2023) ☆18 · Updated last year
- ☆13 · Updated 3 years ago
- Code for "Can We Characterize Tasks Without Labels or Features?" (CVPR 2021) ☆11 · Updated 4 years ago
- Data-free knowledge distillation using Gaussian noise (NeurIPS paper) ☆15 · Updated 2 years ago
- Official code for the ICCV 2023 paper "One-bit Flip is All You Need: When Bit-flip Attack Meets Model Training" ☆20 · Updated 2 years ago
- Code for the ICML 2021 paper "Bridging Multi-Task Learning and Meta-Learning: Towards Efficient Training and Effective Adaptation", Haoxi… ☆68 · Updated 3 years ago
- Code accompanying the NeurIPS 2019 paper "AutoAssist: A Framework to Accelerate Training of Deep Neural Networks". ☆14 · Updated 2 years ago
- DiWA: Diverse Weight Averaging for Out-of-Distribution Generalization ☆31 · Updated 2 years ago