[EMNLP 2024] Quantize LLMs to extremely low bit-widths and finetune the quantized models
☆15 · updated Jul 18, 2024
Alternatives and similar repositories for ApiQ
Users interested in ApiQ are comparing it to the repositories listed below.
- ☆32 · updated Nov 11, 2024
- [CVPR 2025] Efficient Personalization of Quantized Diffusion Model without Backpropagation · ☆15 · updated Mar 31, 2025
- PyTorch implementation of StableMask (ICML'24) · ☆15 · updated Jun 27, 2024
- ☆20 · updated Oct 13, 2024
- [NeurIPS '25] Multi-Token Prediction Needs Registers · ☆27 · updated Dec 14, 2025
- The code for "AttentionPredictor: Temporal Pattern Matters for Efficient LLM Inference", Qingyue Yang, Jie Wang, Xing Li, Zhihai Wang, Ch… · ☆28 · updated Jul 15, 2025
- [ICLR 2025] Official implementation of the paper "Dynamic Low-Rank Sparse Adaptation for Large Language Models" · ☆24 · updated Mar 16, 2025
- [ICLR 2025] Making LLMs More Effective with Hierarchical Mixture of LoRA Experts · ☆28 · updated Oct 9, 2025
- [NeurIPS 2024] VeLoRA: Memory Efficient Training using Rank-1 Sub-Token Projections · ☆21 · updated Oct 15, 2024
- Low-Rank Llama Custom Training · ☆23 · updated Mar 27, 2024
- ☆25 · updated Oct 31, 2024
- [ICLR'25] ARB-LLM: Alternating Refined Binarizations for Large Language Models · ☆28 · updated Aug 5, 2025
- MiSS is a novel PEFT method that features a low-rank structure but introduces a new update mechanism distinct from LoRA, achieving an exc… · ☆32 · updated Jan 28, 2026
- ICLR 2025 · ☆31 · updated May 21, 2025
- Prune transformer layers · ☆74 · updated May 30, 2024
- [NeurIPS 2024 Oral🔥] DuQuant: Distributing Outliers via Dual Transformation Makes Stronger Quantized LLMs · ☆180 · updated Oct 3, 2024
- Official implementation of "MetaLA: Unified Optimal Linear Approximation to Softmax Attention Map" (NeurIPS 2024 Oral) · ☆34 · updated Jan 18, 2025
- [ICML 2025] LoRA fine-tuning directly on quantized models · ☆39 · updated Nov 25, 2024
- RL with Experience Replay · ☆55 · updated Jul 27, 2025
- Official PyTorch implementation of "Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity" · ☆81 · updated Jul 7, 2025
- Code for the ACL 2024 paper "SAPT: A Shared Attention Framework for Parameter-Efficient Continual Learning of Large Language …" · ☆38 · updated Jan 13, 2025
- ☆38 · updated Dec 19, 2024
- [EMNLP 2024] RoLoRA: Fine-tuning Rotated Outlier-free LLMs for Effective Weight-Activation Quantization · ☆38 · updated Sep 24, 2024
- H2-LLM: Hardware-Dataflow Co-Exploration for Heterogeneous Hybrid-Bonding-based Low-Batch LLM Inference · ☆92 · updated Apr 26, 2025
- Preprint: Asymmetry in Low-Rank Adapters of Foundation Models · ☆38 · updated Feb 27, 2024
- An algorithm for weight-activation quantization (W4A4, W4A8) of LLMs, supporting both static and dynamic quantization · ☆172 · updated Nov 26, 2025
- Enable Next-Sentence Prediction for Large Language Models with Faster Speed, Higher Accuracy and Longer Context · ☆41 · updated Aug 16, 2024
- PIM-DL: Expanding the Applicability of Commodity DRAM-PIMs for Deep Learning via Algorithm-System Co-Optimization · ☆36 · updated Feb 21, 2024
- [NeurIPS 2025] Think Silently, Think Fast: Dynamic Latent Compression of LLM Reasoning Chains · ☆83 · updated Jul 29, 2025
- LoFiT: Localized Fine-tuning on LLM Representations · ☆44 · updated Jan 15, 2025
- Code for Self-Cross Diffusion Guidance for Text-to-Image Synthesis of Similar Subjects · ☆11 · updated this week
- flex-block-attn: an efficient block-sparse attention computation library · ☆124 · updated Dec 26, 2025
- ☆52 · updated Jul 18, 2024
- Official PyTorch implementation of CD-MOE · ☆12 · updated Mar 29, 2025
- KAF: Kolmogorov-Arnold Fourier Networks · ☆20 · updated Feb 19, 2025
- ☆40 · updated Jan 16, 2026
- Code for the paper "Concrete Subspace Learning based Interference Elimination for Multi-task Model Fusion" · ☆14 · updated Mar 28, 2024
- [ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference · ☆47 · updated Jun 4, 2024
- [ICML 2024] Unveiling and Harnessing Hidden Attention Sinks: Enhancing Large Language Models without Training through Attention Calibrati… · ☆45 · updated Jun 30, 2024