iurada / px-ntk-pruning
Official repository of our work "Finding Lottery Tickets in Vision Models via Data-driven Spectral Foresight Pruning" accepted at CVPR 2024
☆21 · Updated 9 months ago
Alternatives and similar repositories for px-ntk-pruning:
Users who are interested in px-ntk-pruning are comparing it to the repositories listed below.
- Code for 'Multi-level Logit Distillation' (CVPR 2023) ☆59 · Updated 3 months ago
- Official implementation of the paper "Masked Autoencoders are Efficient Class Incremental Learners" ☆39 · Updated 7 months ago
- [ICCV 23] Robust Mixture-of-Expert Training for Convolutional Neural Networks by Yihua Zhang, Ruisi Cai, Tianlong Chen, Guanhua Zhang, Hua… ☆46 · Updated last year
- ☆26 · Updated 2 years ago
- The official repo for the CVPR 2023 highlight paper "Gradient Norm Aware Minimization Seeks First-Order Flatness and Improves Generalization" ☆80 · Updated last year
- Official implementation for the paper "Knowledge Diffusion for Distillation", NeurIPS 2023 ☆78 · Updated 11 months ago
- ☆25 · Updated last year
- [NeurIPS'23] DropPos: Pre-Training Vision Transformers by Reconstructing Dropped Positions ☆59 · Updated 8 months ago
- ☆16 · Updated 3 years ago
- [ICCV 23] An approach to enhance the efficiency of Vision Transformer (ViT) by concurrently employing token pruning and token merging tech… ☆92 · Updated last year
- CVPR 2023, Class Attention Transfer Based Knowledge Distillation ☆37 · Updated last year
- [ICLR 2024] Improving Convergence and Generalization Using Parameter Symmetries ☆29 · Updated 7 months ago
- [CVPR 2023 Highlight] Masked Image Modeling with Local Multi-Scale Reconstruction ☆46 · Updated last year
- Official implementation for "Knowledge Distillation with Refined Logits" ☆13 · Updated 4 months ago
- Official PyTorch code for "Is Synthetic Data From Diffusion Models Ready for Knowledge Distillation?" (https://arxiv.org/abs/2305.12954) ☆46 · Updated last year
- [CVPR 2024] Efficient Dataset Distillation via Minimax Diffusion ☆89 · Updated 9 months ago
- Learning Efficient Vision Transformers via Fine-Grained Manifold Distillation, NeurIPS 2022 ☆32 · Updated 2 years ago
- (NeurIPS 2023 spotlight) Large-scale Dataset Distillation/Condensation, 50 IPC (Images Per Class) achieves the highest 60.8% on original … ☆125 · Updated 2 months ago
- The official implementation of ImbSAM (Imbalanced-SAM) ☆23 · Updated 10 months ago
- The official project website of "NORM: Knowledge Distillation via N-to-One Representation Matching" (The paper of NORM is published in IC… ☆19 · Updated last year
- Implementation of HAT (https://arxiv.org/pdf/2204.00993) ☆48 · Updated 9 months ago
- The official GitHub repo for "Test-Time Training with Masked Autoencoders" ☆80 · Updated last year
- ☆41 · Updated 2 years ago
- ImageNet-1K data download and processing for use as a dataset ☆77 · Updated last year
- ICLR 2024, Towards Lossless Dataset Distillation via Difficulty-Aligned Trajectory Matching ☆95 · Updated 7 months ago
- Benchmarking Generalized Out-of-Distribution Detection with Vision-Language Models ☆22 · Updated last month
- ☆42 · Updated last year
- Continual Forgetting for Pre-trained Vision Models (CVPR 2024) ☆48 · Updated this week
- [ICCV 2023] CLR: Channel-wise Lightweight Reprogramming for Continual Learning ☆29 · Updated 7 months ago
- Denoising Masked Autoencoders Help Robust Classification ☆60 · Updated last year