[EMNLP 2024] Quantize LLMs to extremely low bit-widths and finetune the quantized LLMs
☆15 · Jul 18, 2024 · Updated last year
Alternatives and similar repositories for ApiQ
Users interested in ApiQ are comparing it to the libraries listed below.
- PyTorch implementation of StableMask (ICML'24) ☆15 · Jun 27, 2024 · Updated last year
- [NeurIPS 2024] VeLoRA: Memory Efficient Training using Rank-1 Sub-Token Projections ☆21 · Oct 15, 2024 · Updated last year
- [ICLR 2025] Making LLMs More Effective with Hierarchical Mixture of LoRA Experts ☆31 · Oct 9, 2025 · Updated 7 months ago
- ☆21 · Oct 13, 2024 · Updated last year
- [CVPR 2025] Efficient Personalization of Quantized Diffusion Model without Backpropagation ☆16 · Mar 31, 2025 · Updated last year
- Low-Rank Llama Custom Training ☆23 · Mar 27, 2024 · Updated 2 years ago
- Code and data release for "Improving Multilingual Translation by Representation and Gradient Regularization" (Yang et al., EMNLP 2021), an… ☆13 · Aug 12, 2024 · Updated last year
- [NeurIPS 2024 Oral🔥] DuQuant: Distributing Outliers via Dual Transformation Makes Stronger Quantized LLMs ☆181 · Apr 24, 2026 · Updated 2 weeks ago
- [ICLR 2025] Official implementation of the paper "Dynamic Low-Rank Sparse Adaptation for Large Language Models" ☆24 · Mar 16, 2025 · Updated last year
- ☆20 · Feb 2, 2026 · Updated 3 months ago
- Code for the ACL 2024 paper "SAPT: A Shared Attention Framework for Parameter-Efficient Continual Learning of Large Language …" ☆40 · Jan 13, 2025 · Updated last year
- The code for "AttentionPredictor: Temporal Pattern Matters for Efficient LLM Inference", Qingyue Yang, Jie Wang, Xing Li, Zhihai Wang, Ch… ☆28 · Jul 15, 2025 · Updated 9 months ago
- ☆10 · Apr 16, 2024 · Updated 2 years ago
- MiSS is a novel PEFT method that features a low-rank structure but introduces a new update mechanism distinct from LoRA, achieving an exc… ☆35 · Mar 9, 2026 · Updated 2 months ago
- Preprint: Asymmetry in Low-Rank Adapters of Foundation Models ☆39 · Feb 27, 2024 · Updated 2 years ago
- [NeurIPS '25] Multi-Token Prediction Needs Registers ☆29 · Dec 14, 2025 · Updated 4 months ago
- [ICML 2025] LoRA fine-tuning directly on INT4 models ☆40 · Nov 25, 2024 · Updated last year
- An algorithm for weight-activation quantization (W4A4, W4A8) of LLMs, supporting both static and dynamic quantization ☆171 · Nov 26, 2025 · Updated 5 months ago
- ICLR 2025 ☆31 · May 21, 2025 · Updated 11 months ago
- Awesome Low-Rank Adaptation ☆60 · Apr 20, 2026 · Updated 2 weeks ago
- ☆25 · Oct 31, 2024 · Updated last year
- Tuning-Free Image Editing with Fidelity and Editability via Unified Latent Diffusion Model ☆13 · Dec 29, 2024 · Updated last year
- LoFiT: Localized Fine-tuning on LLM Representations ☆45 · Jan 15, 2025 · Updated last year
- A minimal re-implementation of orthogonal fine-tuning (OFT), a diffusion method, for LLMs. Based on nanoGPT and minLoRA ☆14 · Nov 17, 2023 · Updated 2 years ago
- [ICLR'25] ARB-LLM: Alternating Refined Binarizations for Large Language Models ☆29 · Aug 5, 2025 · Updated 9 months ago
- ☆14 · May 4, 2024 · Updated 2 years ago
- [NeurIPS 2025] Think Silently, Think Fast: Dynamic Latent Compression of LLM Reasoning Chains ☆93 · Mar 27, 2026 · Updated last month
- A toolkit implementing advanced methods to transfer models and model knowledge across tokenizers ☆67 · Jul 6, 2025 · Updated 10 months ago
- Official implementation of "MetaLA: Unified Optimal Linear Approximation to Softmax Attention Map" (NeurIPS 2024 Oral) ☆36 · Jan 18, 2025 · Updated last year
- Representation Surgery for Multi-Task Model Merging (ICML 2024) ☆49 · Oct 10, 2024 · Updated last year
- Codebase for the ICML submission "DOGE: Domain Reweighting with Generalization Estimation" ☆21 · Feb 29, 2024 · Updated 2 years ago
- CorDA: Context-Oriented Decomposition Adaptation of Large Language Models for task-aware parameter-efficient fine-tuning (NeurIPS 2024) ☆56 · Jan 13, 2025 · Updated last year
- Prune transformer layers ☆74 · May 30, 2024 · Updated last year
- Exploring the Limitations of Large Language Models on Multi-Hop Queries ☆33 · Mar 2, 2025 · Updated last year
- [ICML 2024] SPP: Sparsity-Preserved Parameter-Efficient Fine-Tuning for Large Language Models ☆22 · May 28, 2024 · Updated last year
- ☆11 · Feb 26, 2024 · Updated 2 years ago
- ☆53 · Jul 18, 2024 · Updated last year
- flex-block-attn: an efficient block sparse attention computation library ☆130 · Dec 26, 2025 · Updated 4 months ago
- Official implementation of the ICML 2024 paper RoSA (Robust Adaptation) ☆45 · Feb 13, 2024 · Updated 2 years ago