wujx2001 / QwT
Official PyTorch implementation of QwT—“Quantization without Tears” (CVPR 2025): fast, accurate, and hassle-free post-training network quantization with lightweight linear compensation layers.
☆31 · Updated 4 months ago
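The description above summarizes QwT's core idea: after post-training quantization, a lightweight linear layer compensates for the quantization error. As a rough illustration only (not the official QwT implementation, whose exact formulation is in the paper and repository), the compensation can be fitted in closed form by least squares, mapping the quantized block's activations toward the full-precision ones; the function names here are illustrative.

```python
# Hedged sketch of the linear-compensation idea behind QwT (assumption:
# a closed-form least-squares fit of an extra linear layer per block;
# see the official repo for the actual method).
import numpy as np

def fit_compensation(x_quant, y_fp):
    """Solve min_{W,b} ||y_fp - (x_quant + x_quant @ W + b)||^2 in closed form.

    x_quant: (N, D) activations produced by the quantized block
    y_fp:    (N, D) activations produced by the full-precision block
    Returns (W, b) for the lightweight linear compensation layer.
    """
    ones = np.ones((x_quant.shape[0], 1))
    A = np.hstack([x_quant, ones])      # augment with a bias column
    residual = y_fp - x_quant           # error the extra layer must absorb
    sol, *_ = np.linalg.lstsq(A, residual, rcond=None)
    W, b = sol[:-1], sol[-1]
    return W, b

def apply_compensation(x_quant, W, b):
    # Quantized output plus the learned linear correction.
    return x_quant + x_quant @ W + b
```

Because the fit is closed-form, no gradient-based fine-tuning is needed, which is what makes this style of compensation fast and "tear-free" for post-training use.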
Alternatives and similar repositories for QwT
Users interested in QwT are comparing it to the libraries listed below.
- [ICML'25] Official implementation of paper "SparseVLM: Visual Token Sparsification for Efficient Vision-Language Model Inference" and "Sp…" ☆237 · Updated last month
- A collection of token reduction (token pruning, merging, clustering, etc.) techniques for ML/AI ☆315 · Updated this week
- 📚 Collection of token-level model compression resources. ☆190 · Updated 5 months ago
- [AAAI 2025] The official code for SiTo (Similarity-based Token Pruning for Stable Diffusion Models) ☆43 · Updated 8 months ago
- [NeurIPS'24] Efficient and accurate memory-saving method towards W4A4 large multi-modal models. ☆95 · Updated last year
- [NeurIPS 2025] Official code for paper: Beyond Attention or Similarity: Maximizing Conditional Diversity for Token Pruning in MLLMs. ☆86 · Updated 4 months ago
- [NeurIPS'24] MoE Jetpack: From Dense Checkpoints to Adaptive Mixture of Experts for Vision Tasks ☆134 · Updated last year
- MoH: Multi-Head Attention as Mixture-of-Head Attention ☆302 · Updated last year
- [CVPR 2025] PACT: Pruning and Clustering-Based Token Reduction for Faster Visual Language Models ☆55 · Updated last week
- [CVPR'24] Once for Both: Single Stage of Importance and Sparsity Search for Vision Transformer Compression ☆15 · Updated last year
- [ICCV'25] The official code of the paper "Combining Similarity and Importance for Video Token Reduction on Large Visual Language Models" ☆69 · Updated 3 weeks ago
- (CVPR 2025) PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction ☆141 · Updated 11 months ago
- A paper list on token merging, reduction, resampling, and dropping for MLLMs. ☆84 · Updated 3 months ago
- Official code for the paper "[CLS] Attention is All You Need for Training-Free Visual Token Pruning: Make VLM Inference Faster." ☆106 · Updated 7 months ago
- [ICLR 2025] MoE++: Accelerating Mixture-of-Experts Methods with Zero-Computation Experts ☆266 · Updated last year
- A training-free approach to accelerate ViTs and VLMs by pruning redundant tokens based on similarity ☆42 · Updated 8 months ago
- [CVPR 2024 Highlight] Logit Standardization in Knowledge Distillation ☆391 · Updated last year
- [CVPR 2025] DivPrune: Diversity-based Visual Token Pruning for Large Multimodal Models ☆65 · Updated 2 months ago
- [CVPR 2024 Highlight & TPAMI 2025] This is the official PyTorch implementation of "TFMQ-DM: Temporal Feature Maintenance Quantization for…" ☆108 · Updated 4 months ago
- A project on visual spatial reasoning. ☆89 · Updated last month
- Official repository for VisionZip (CVPR 2025) ☆405 · Updated 6 months ago
- [ICML 2025] Official code of "From Local Details to Global Context: Advancing Vision-Language Models with Attention-Based Selection" ☆25 · Updated 7 months ago
- [EMNLP 2025 Main] Video Compression Commander: Plug-and-Play Inference Acceleration for Video Large Language Models ☆61 · Updated last week
- A paper list of recent works on token compression for ViT and VLM ☆828 · Updated this week
- PyTorch implementation of PTQ4DiT (https://arxiv.org/abs/2405.16005) ☆45 · Updated last year
- [ICML 2025] This is the official PyTorch implementation of "🎵 HarmoniCa: Harmonizing Training and Inference for Better Feature Caching i…" ☆44 · Updated 7 months ago
- ☆64 · Updated 3 weeks ago
- [TMLR 2026] Survey: https://arxiv.org/pdf/2507.20198 ☆299 · Updated this week
- [NeurIPS 2025] HoliTom: Holistic Token Merging for Fast Video Large Language Models ☆70 · Updated 4 months ago
- OmniZip: Audio-Guided Dynamic Token Compression for Fast Omnimodal Large Language Models ☆51 · Updated last week