Cheliosoops / BitQLinks
☆10 · Updated last year
Alternatives and similar repositories for BitQ
Users interested in BitQ are comparing it to the repositories listed below.
- The official implementation of "NAS-BNN: Neural Architecture Search for Binary Neural Networks" (☆12, updated last year)
- (ICLR 2025) BinaryDM: Accurate Weight Binarization for Efficient Diffusion Models (☆25, updated last year)
- [ICLR 2024 Spotlight] Official PyTorch implementation of "EfficientDM: Efficient Quantization-Aware Fine-Tuning of Low-Bit Di…" (☆66, updated last year)
- [ECCV 2022] SuperTickets: Drawing Task-Agnostic Lottery Tickets from Supernets via Jointly Architecture Searching and Parameter Pruning (☆20, updated 3 years ago)
- [CVPR 2024 Highlight & TPAMI 2025] Official PyTorch implementation of "TFMQ-DM: Temporal Feature Maintenance Quantization for…" (☆109, updated last month)
- [ICCV 2025] QuEST: Efficient Finetuning for Low-bit Diffusion Models (☆55, updated 4 months ago)
- ☆15, updated 8 months ago
- [NeurIPS 2023] ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer (☆30, updated last year)
- ☆16, updated last year
- TerDiT: Ternary Diffusion Models with Transformers (☆71, updated last year)
- [CVPR 2025] Q-DiT: Accurate Post-Training Quantization for Diffusion Transformers (☆72, updated last year)
- [ICML 2025] Official PyTorch implementation of "ZipAR: Accelerating Auto-regressive Image Generation through Spatial Locality…" (☆53, updated 7 months ago)
- [CVPR 2025] PVC: Progressive Visual Token Compression for Unified Image and Video Processing in Large Vision-Language Models (☆50, updated 5 months ago)
- [CVPR 2025] CoDe: Collaborative Decoding Makes Visual Auto-Regressive Modeling Efficient (☆107, updated last month)
- torch_quantizer: an out-of-the-box quantization tool for PyTorch models on the CUDA backend, specially optimized for diffusion models (☆22, updated last year)
- [NeurIPS 2025] dKV-Cache: The Cache for Diffusion Language Models (☆119, updated 6 months ago)
- [ECCV 2024] MixDQ: Memory-Efficient Few-Step Text-to-Image Diffusion Models with Metric-Decoupled Mixed Precision Quantization (☆14, updated 11 months ago)
- Code repository of "MBQ: Modality-Balanced Quantization for Large Vision-Language Models" (☆66, updated 8 months ago)
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models (☆36, updated last year)
- [ICLR 2025] Official PyTorch implementation of "Mix-LN: Unleashing the Power of Deeper Layers by Combining Pre-LN and Post-LN" by Pengxia… (☆27, updated 3 months ago)
- (NeurIPS 2024) BiDM: Pushing the Limit of Quantization for Diffusion Models (☆22, updated last year)
- Official repository of continuous speculative decoding (☆30, updated 7 months ago)
- Triton implementation of bi-directional (non-causal) linear attention (☆56, updated 9 months ago)
- PyTorch implementation of quantization-aware matrix factorization (QMF) for data compression (☆14, updated 4 months ago)
- A framework to compare low-bit integer and floating-point formats (☆42, updated 3 weeks ago)
- [NeurIPS 2024] Learning-to-Cache: Accelerating Diffusion Transformer via Layer Caching (☆116, updated last year)
- NeuMeta transforms neural networks by allowing a single model to adapt on the fly to different sizes, generating the right weights when n… (☆43, updated last year)
- [CVPR 2023] Castling-ViT: Compressing Self-Attention via Switching Towards Linear-Angular Attention During Vision Transformer Inference (☆30, updated last year)
- [CVPR 2025 Highlight] TinyFusion: Diffusion Transformers Learned Shallow (☆145, updated 7 months ago)
- [ICML 2025] SparseLoRA: Accelerating LLM Fine-Tuning with Contextual Sparsity (☆60, updated 4 months ago)