VITA-Group / BackRazor_Neurips22
[NeurIPS 2022] “Back Razor: Memory-Efficient Transfer Learning by Self-Sparsified Backpropagation”, Ziyu Jiang*, Xuxi Chen*, Xueqin Huang, Xianzhi Du, Denny Zhou, Zhangyang Wang
☆19 · Updated last year
Alternatives and similar repositories for BackRazor_Neurips22:
Users interested in BackRazor_Neurips22 are comparing it to the repositories listed below.
- Official PyTorch implementation of our paper accepted at ICLR 2024 -- Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLM… ☆41 · Updated 10 months ago
- [ICLR'23] Trainability Preserving Neural Pruning (PyTorch) ☆32 · Updated last year
- ☆42 · Updated last year
- [ICLR 2022] "Unified Vision Transformer Compression" by Shixing Yu*, Tianlong Chen*, Jiayi Shen, Huan Yuan, Jianchao Tan, Sen Yang, Ji Li… ☆52 · Updated last year
- BESA is a differentiable weight pruning technique for large language models. ☆14 · Updated 11 months ago
- [Preprint] Why is the State of Neural Network Pruning so Confusing? On the Fairness, Comparison Setup, and Trainability in Network Prunin… ☆40 · Updated last year
- ICLR 2024, Towards Lossless Dataset Distillation via Difficulty-Aligned Trajectory Matching ☆99 · Updated 8 months ago
- [NeurIPS'21] "Chasing Sparsity in Vision Transformers: An End-to-End Exploration" by Tianlong Chen, Yu Cheng, Zhe Gan, Lu Yuan, Lei Zhang… ☆89 · Updated last year
- Towards Meta-Pruning via Optimal Transport, ICLR 2024 (Spotlight) ☆15 · Updated 2 months ago
- ☆49 · Updated last year
- [NeurIPS 2024] Search for Efficient LLMs ☆12 · Updated last month
- The code repository of "MBQ: Modality-Balanced Quantization for Large Vision-Language Models" ☆29 · Updated last month
- ☆19 · Updated 2 years ago
- ☆46 · Updated 2 months ago
- [NeurIPS 2023] ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer ☆32 · Updated last year
- [ICML 2024] CrossGET: Cross-Guided Ensemble of Tokens for Accelerating Vision-Language Transformers. ☆31 · Updated last month
- Code release for Deep Incubation (https://arxiv.org/abs/2212.04129) ☆91 · Updated last year
- ☆30 · Updated 2 years ago
- [NeurIPS'22] What Makes a "Good" Data Augmentation in Knowledge Distillation -- A Statistical Perspective ☆36 · Updated 2 years ago
- [ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference ☆34 · Updated 8 months ago
- [ICLR 2025] Linear Combination of Saved Checkpoints Makes Consistency and Diffusion Models Better ☆14 · Updated 10 months ago
- A generic code base for neural network pruning, especially for pruning at initialization. ☆30 · Updated 2 years ago
- PyTorch code for Q-DiT: Accurate Post-Training Quantization for Diffusion Transformers ☆37 · Updated 5 months ago
- It's All In the Teacher: Zero-Shot Quantization Brought Closer to the Teacher [CVPR 2022 Oral] ☆30 · Updated 2 years ago
- ☆12 · Updated last year
- [ICML 2024] SPP: Sparsity-Preserved Parameter-Efficient Fine-Tuning for Large Language Models ☆18 · Updated 8 months ago
- [ICML 2021] "Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training" by Shiwei Liu, Lu Yin, De… ☆46 · Updated last year
- [ICCV 23] An approach to enhance the efficiency of Vision Transformer (ViT) by concurrently employing token pruning and token merging tech… ☆93 · Updated last year
- [IJCAI-2021] Contrastive Model Inversion for Data-Free Knowledge Distillation ☆69 · Updated 2 years ago
- ☆26 · Updated 2 years ago