iurada / px-ntk-pruning
Official repository of our work "Finding Lottery Tickets in Vision Models via Data-driven Spectral Foresight Pruning" accepted at CVPR 2024
☆24 · Updated 4 months ago
Alternatives and similar repositories for px-ntk-pruning
Users interested in px-ntk-pruning are comparing it to the repositories listed below
- [ICCV 2023] Robust Mixture-of-Expert Training for Convolutional Neural Networks, by Yihua Zhang, Ruisi Cai, Tianlong Chen, Guanhua Zhang, Hua… ☆60 · Updated last year
- [NeurIPS 2023] Official implementation for the paper "Knowledge Diffusion for Distillation" ☆88 · Updated last year
- [CVPR 2023] Code for "Multi-level Logit Distillation" ☆66 · Updated 9 months ago
- [AAAI 2023] Official PyTorch code for "Curriculum Temperature for Knowledge Distillation" ☆176 · Updated 7 months ago
- ImageNet-1K data download and processing for use as a dataset ☆100 · Updated 2 years ago
- The official implementation for the paper "Improving Knowledge Distillation via Regularizing Feature Norm and Direction" ☆21 · Updated last year
- [NeurIPS 2023] DropPos: Pre-Training Vision Transformers by Reconstructing Dropped Positions ☆61 · Updated last year
- [ICCV 2023] An approach to enhance the efficiency of Vision Transformers (ViT) by concurrently employing token pruning and token merging tech… ☆99 · Updated 2 years ago
- [CVPR 2023 Highlight] This is the official implementation of "Stitchable Neural Networks" ☆247 · Updated 2 years ago
- [NeurIPS 2023 Spotlight] Large-scale Dataset Distillation/Condensation; 50 IPC (Images Per Class) achieves the highest 60.8% on original … ☆129 · Updated 8 months ago
- [ICLR 2024] Towards Lossless Dataset Distillation via Difficulty-Aligned Trajectory Matching ☆101 · Updated last year
- A collection of plotting tools for deep learning that I have written; if you find them helpful, please give the repo a star. ☆41 · Updated 2 years ago
- This repository maintains a collection of important papers on knowledge distillation (awesome-knowledge-distillation). ☆79 · Updated 3 months ago
- The official repo for the CVPR 2023 highlight paper "Gradient Norm Aware Minimization Seeks First-Order Flatness and Improves Generalization" ☆83 · Updated 2 years ago
- Awesome-Low-Rank-Adaptation ☆110 · Updated 9 months ago
- [CVPR 2023] SparseViT: Revisiting Activation Sparsity for Efficient High-Resolution Vision Transformer ☆72 · Updated last year
- [CVPR 2024] Efficient Dataset Distillation via Minimax Diffusion ☆94 · Updated last year
- [CVPR 2023] This repository includes the official implementation of our paper "Masked Autoencoders Enable Efficient Knowledge Distillers" ☆106 · Updated last year
- [NeurIPS 2024] The official implementation of "Dynamic Tuning Towards Parameter and Inference Efficiency for ViT Adaptation" ☆46 · Updated 6 months ago
- [CVPR 2024] The official implementation for "MTLoRA: A Low-Rank Adaptation Approach for Efficient Multi-Task Learning" ☆55 · Updated last week
- [NeurIPS 2022] Official implementation of the paper "Knowledge Distillation from A Stronger Teacher" ☆147 · Updated 2 years ago
- [CVPR 2023] Class Attention Transfer Based Knowledge Distillation ☆44 · Updated 2 years ago
- PyTorch code and checkpoints release for OFA-KD: https://arxiv.org/abs/2310.19444 ☆128 · Updated last year
- Recent Advances on Efficient Vision Transformers ☆51 · Updated 2 years ago
- [AAAI 2023] Official implementation of the paper "Parameter-efficient Model Adaptation for Vision Transformers" ☆104 · Updated last year
- [CVPR 2022] Official implementation for "Knowledge Distillation with the Reused Teacher Classifier" ☆98 · Updated 3 years ago
- [CVPR 2023] Castling-ViT: Compressing Self-Attention via Switching Towards Linear-Angular Attention During Vision Transformer Inference ☆30 · Updated last year
- Training ImageNet / CIFAR models with SOTA strategies and fancy techniques such as ViT, KD, Rep, etc. ☆82 · Updated last year
- [ICCV 2023 & AAAI 2023] Binary Adapters & FacT; [Tech report] Convpass ☆191 · Updated last year
- ☆90 · Updated 2 years ago