xjjxmu / UniPTS
The official code for "UniPTS: A Unified Framework for Proficient Post-Training Sparsity" | [CVPR 2024]
☆9 · Updated 3 months ago
Alternatives and similar repositories for UniPTS:
Users interested in UniPTS are comparing it to the repositories listed below.
- Official implementation of the paper "SparseVLM: Visual Token Sparsification for Efficient Vision-Language Model Inference" proposed by Pekin… ☆67 · Updated 2 months ago
- MADTP: Multimodal Alignment-Guided Dynamic Token Pruning for Accelerating Vision-Language Transformer ☆36 · Updated 4 months ago
- [ICML 2023] UPop: Unified and Progressive Pruning for Compressing Vision-Language Transformers ☆101 · Updated 3 weeks ago
- Code release for VTW (AAAI 2025, Oral) ☆28 · Updated this week
- [NeurIPS'24] Efficient and accurate memory-saving method towards W4A4 large multi-modal models ☆58 · Updated 2 weeks ago
- A paper list of recent works on token compression for ViT and VLM ☆284 · Updated last week
- [CVPR 2024] Official PyTorch code for "PromptKD: Unsupervised Prompt Distillation for Vision-Language Models" ☆263 · Updated 3 weeks ago
- 'NKD and USKD' (ICCV 2023) and 'ViTKD' (CVPRW 2024) ☆221 · Updated last year
- [ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference ☆31 · Updated 7 months ago
- [NeurIPS 2024 Oral🔥] DuQuant: Distributing Outliers via Dual Transformation Makes Stronger Quantized LLMs ☆137 · Updated 3 months ago
- [ECCV 2024] Isomorphic Pruning for Vision Models ☆61 · Updated 5 months ago
- [ICLR 2024] Towards Lossless Dataset Distillation via Difficulty-Aligned Trajectory Matching ☆95 · Updated 7 months ago
- [CVPR'24] Official implementation of the paper "FreeKD: Knowledge Distillation via Semantic Frequency Prompt" ☆34 · Updated 9 months ago
- LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models ☆109 · Updated 8 months ago
- A brief repo about paper research ☆13 · Updated 4 months ago
- PyTorch implementation of PTQ4DiT (https://arxiv.org/abs/2405.16005) ☆19 · Updated 2 months ago
- [AAAI 2023] Official PyTorch code for "Curriculum Temperature for Knowledge Distillation" ☆163 · Updated last month
- [EMNLP 2024 Findings🔥] Official implementation of "LOOK-M: Look-Once Optimization in KV Cache for Efficient Multimodal Long-Context Infe…" ☆88 · Updated 2 months ago
- ☆40 · Updated last month
- [CVPR 2024] Efficient Dataset Distillation via Minimax Diffusion ☆89 · Updated 9 months ago
- A paper list on token merging, reduction, resampling, and dropping for MLLMs ☆18 · Updated last week
- PyTorch code and checkpoint release for OFA-KD (https://arxiv.org/abs/2310.19444) ☆108 · Updated 9 months ago
- A summary of efficient segment-anything models ☆86 · Updated 5 months ago
- ☆61 · Updated 2 months ago
- 📚 Collection of awesome generation-acceleration resources ☆96 · Updated this week
- [NeurIPS 2023 Spotlight] Large-scale dataset distillation/condensation; 50 IPC (images per class) achieves the highest 60.8% on original … ☆125 · Updated 2 months ago
- [CVPR 2024] Official implementation of CLIP-KD: An Empirical Study of CLIP Model Distillation ☆94 · Updated 6 months ago
- ImageNet-1K data download and processing for use as a dataset ☆77 · Updated last year
- Official code for the paper "[CLS] Attention is All You Need for Training-Free Visual Token Pruning: Make VLM Inference Faster" ☆44 · Updated last month
- ☆25 · Updated 7 months ago