BaohaoLiao / ApiQ
[EMNLP 2024] Quantize LLMs to extremely low bit-widths, and finetune the quantized LLMs
☆15 · Updated Jul 18, 2024
Alternatives and similar repositories for ApiQ
Users interested in ApiQ are comparing it to the libraries listed below:
- ☆31 · Updated Nov 11, 2024
- [CVPR 2025] Efficient Personalization of Quantized Diffusion Model without Backpropagation ☆15 · Updated Mar 31, 2025
- PyTorch implementation of StableMask (ICML'24) ☆15 · Updated Jun 27, 2024
- [NeurIPS '25] Multi-Token Prediction Needs Registers ☆26 · Updated Dec 14, 2025
- ☆20 · Updated Oct 13, 2024
- The code for "AttentionPredictor: Temporal Pattern Matters for Efficient LLM Inference", Qingyue Yang, Jie Wang, Xing Li, Zhihai Wang, Ch… ☆27 · Updated Jul 15, 2025
- [ICLR 2025] Official implementation of the paper "Dynamic Low-Rank Sparse Adaptation for Large Language Models" ☆23 · Updated Mar 16, 2025
- [NeurIPS 2024] VeLoRA: Memory Efficient Training using Rank-1 Sub-Token Projections ☆21 · Updated Oct 15, 2024
- [ICLR 2025] Making LLMs More Effective with Hierarchical Mixture of LoRA Experts ☆27 · Updated Oct 9, 2025
- Low-Rank Llama Custom Training ☆23 · Updated Mar 27, 2024
- [ICLR'25] ARB-LLM: Alternating Refined Binarizations for Large Language Models ☆28 · Updated Aug 5, 2025
- ☆25 · Updated Oct 31, 2024
- MiSS is a novel PEFT method that features a low-rank structure but introduces a new update mechanism distinct from LoRA, achieving an exc… ☆31 · Updated Jan 28, 2026
- ICLR 2025 ☆31 · Updated May 21, 2025
- Prune transformer layers ☆74 · Updated May 30, 2024
- [NeurIPS 2024 Oral🔥] DuQuant: Distributing Outliers via Dual Transformation Makes Stronger Quantized LLMs ☆180 · Updated Oct 3, 2024
- Official implementation of "MetaLA: Unified Optimal Linear Approximation to Softmax Attention Map" (NeurIPS 2024 Oral) ☆34 · Updated Jan 18, 2025
- [ICML 2025] LoRA fine-tuning directly on quantized models ☆39 · Updated Nov 25, 2024
- RL with Experience Replay ☆55 · Updated Jul 27, 2025
- [EMNLP 2024] RoLoRA: Fine-tuning Rotated Outlier-free LLMs for Effective Weight-Activation Quantization ☆37 · Updated Sep 24, 2024
- ☆37 · Updated Dec 19, 2024
- Code for the ACL 2024 paper "SAPT: A Shared Attention Framework for Parameter-Efficient Continual Learning of Large Language …" ☆38 · Updated Jan 13, 2025
- Official PyTorch implementation of "Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity" ☆82 · Updated Jul 7, 2025
- Preprint: Asymmetry in Low-Rank Adapters of Foundation Models ☆38 · Updated Feb 27, 2024
- H2-LLM: Hardware-Dataflow Co-Exploration for Heterogeneous Hybrid-Bonding-based Low-Batch LLM Inference ☆87 · Updated Apr 26, 2025
- [NeurIPS 2025] Think Silently, Think Fast: Dynamic Latent Compression of LLM Reasoning Chains ☆79 · Updated Jul 29, 2025
- An algorithm for weight-activation quantization (W4A4, W4A8) of LLMs, supporting both static and dynamic quantization ☆172 · Updated Nov 26, 2025
- PIM-DL: Expanding the Applicability of Commodity DRAM-PIMs for Deep Learning via Algorithm-System Co-Optimization ☆36 · Updated Feb 21, 2024
- Official implementation of the ICML 2024 paper RoSA (Robust Adaptation) ☆44 · Updated Feb 13, 2024
- Enable Next-sentence Prediction for Large Language Models with Faster Speed, Higher Accuracy and Longer Context ☆41 · Updated Aug 16, 2024
- LoFiT: Localized Fine-tuning on LLM Representations ☆44 · Updated Jan 15, 2025
- flex-block-attn: an efficient block-sparse attention computation library ☆108 · Updated Dec 26, 2025
- Code for "Self-Cross Diffusion Guidance for Text-to-Image Synthesis of Similar Subjects" ☆11 · Updated Dec 19, 2025
- ☆52 · Updated Jul 18, 2024
- [ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference ☆46 · Updated Jun 4, 2024
- [ICML 2024] Unveiling and Harnessing Hidden Attention Sinks: Enhancing Large Language Models without Training through Attention Calibrati… ☆46 · Updated Jun 30, 2024
- ☆11 · Updated Dec 15, 2025
- A minimal re-implementation of orthogonal fine-tuning (OFT) for LLMs, based on nanoGPT and minLoRA ☆13 · Updated Nov 17, 2023
- [COLING 2025 Industry] LoRA Soups ☆18 · Updated Nov 29, 2024