[ICLR'25] ARB-LLM: Alternating Refined Binarizations for Large Language Models
☆28 · Updated Aug 5, 2025
Alternatives and similar repositories for ARB-LLM
Users interested in ARB-LLM are comparing it to the repositories listed below.
- ☆23 · Updated Jul 14, 2025
- PyTorch code for our paper "Progressive Binarization with Semi-Structured Pruning for LLMs" ☆13 · Updated Sep 28, 2025
- [ICCV 2025] QuantCache: Adaptive Importance-Guided Quantization with Hierarchical Latent and Layer Caching for Video Generation ☆15 · Updated Sep 26, 2025
- ☆14 · Updated May 5, 2025
- ☆19 · Updated Mar 11, 2025
- D^2-MoE: Delta Decompression for MoE-based LLMs Compression ☆72 · Updated Mar 25, 2025
- ☆16 · Updated May 27, 2025
- Efficient Expert Pruning for Sparse Mixture-of-Experts Language Models: Enhancing Performance and Reducing Inference Costs ☆23 · Updated Nov 11, 2025
- This repository collects low-bit quantization papers from 2020 to 2025 at top conferences. ☆103 · Updated Feb 13, 2026
- Official PyTorch implementation of CD-MOE ☆12 · Updated Mar 29, 2025
- ☆11 · Updated Apr 3, 2023
- PyTorch code for our paper "Dog-IQA: Standard-guided Zero-shot MLLM for Mix-grain Image Quality Assessment" ☆28 · Updated Oct 7, 2024
- [ICLR 2025] Linear Combination of Saved Checkpoints Makes Consistency and Diffusion Models Better ☆16 · Updated Feb 15, 2025
- [NAACL'25 🏆 SAC Award] Official code for "Advancing MoE Efficiency: A Collaboration-Constrained Routing (C2R) Strategy for Better Expert…" ☆15 · Updated Feb 4, 2025
- [ICCV 2021] Code release for "Sub-bit Neural Networks: Learning to Compress and Accelerate Binary Neural Networks" ☆32 · Updated Jul 24, 2022
- [TMLR] Official PyTorch implementation of the paper "Efficient Quantization-aware Training with Adaptive Coreset Selection" ☆37 · Updated Aug 20, 2024
- [ICLR'25] STBLLM: Breaking the 1-Bit Barrier with Structured Binary LLMs ☆18 · Updated Jun 3, 2025
- [ICML 2023] Official implementation of the paper "BiBench: Benchmarking and Analyzing Network Binar…" ☆56 · Updated Mar 4, 2024
- [EMNLP 2024] Quantize LLMs to extremely low bit-widths and finetune the quantized models ☆15 · Updated Jul 18, 2024
- [NeurIPS'25] OSCAR: One-Step Diffusion Codec Across Multiple Bit-rates ☆33 · Updated Oct 20, 2025
- ☆15 · Updated Jan 7, 2026
- ☆19 · Updated Apr 3, 2025
- [ECCV 2022] Code for "Network Binarization via Contrastive Learning" ☆14 · Updated Jul 13, 2022
- PyTorch code for our paper "2DQuant: Low-bit Post-Training Quantization for Image Super-Resolution" ☆47 · Updated Oct 24, 2024
- [ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference ☆47 · Updated Jun 4, 2024
- Xformer: Hybrid X-Shaped Transformer for Image Denoising ☆55 · Updated Feb 20, 2024
- ☆23 · Updated Nov 26, 2024
- [ICLR 2025] Official implementation of "Dynamic Low-Rank Sparse Adaptation for Large Language Models" ☆23 · Updated Mar 16, 2025
- ☆54 · Updated Oct 30, 2025
- [NeurIPS'24] Efficient and accurate memory-saving method towards W4A4 large multi-modal models ☆98 · Updated Jan 3, 2025
- [ICLR 2025] BinaryDM: Accurate Weight Binarization for Efficient Diffusion Models ☆26 · Updated Oct 4, 2024
- [ICCV 2023] Official PyTorch code for "Iterative Soft Shrinkage Learning for Efficient Image Super-Resolution" ☆29 · Updated Mar 10, 2024
- AFPQ code implementation ☆23 · Updated Nov 6, 2023
- [ICML 2024] BiLLM: Pushing the Limit of Post-Training Quantization for LLMs ☆228 · Updated Jan 11, 2025
- Code repo for the paper "BiT: Robustly Binarized Multi-distilled Transformer" ☆114 · Updated Jun 26, 2023
- Implementation of the paper "CMoE: Fast Carving of Mixture-of-Experts for Efficient LLM Inference" ☆35 · Updated Mar 6, 2025
- [NeurIPS 2024 Oral🔥] DuQuant: Distributing Outliers via Dual Transformation Makes Stronger Quantized LLMs ☆180 · Updated Oct 3, 2024
- ☆30 · Updated Jul 22, 2024
- ☆32 · Updated Nov 11, 2024