LoftQ · ☆235 · last updated Jun 11, 2024
Alternatives and similar repositories for LoftQ
Users interested in LoftQ are comparing it to the libraries listed below.
- Official PyTorch implementation of QA-LoRA · ☆146 · updated Mar 13, 2024
- (no description) · ☆129 · updated Jan 22, 2024
- [ICML 2024 Oral] Official implementation of "Accurate LoRA-Finetuning Quantization of LLMs via Information Retention" · ☆65 · updated Apr 15, 2024
- [EMNLP 2024] RoLoRA: Fine-tuning Rotated Outlier-free LLMs for Effective Weight-Activation Quantization · ☆39 · updated Sep 24, 2024
- PB-LLM: Partially Binarized Large Language Models · ☆155 · updated Nov 20, 2023
- [ICLR 2024 Spotlight] OmniQuant, a simple and powerful quantization technique for LLMs · ☆892 · updated Nov 26, 2025
- AFPQ code implementation · ☆23 · updated Nov 6, 2023
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization · ☆717 · updated Aug 13, 2024
- (no description) · ☆588 · updated Oct 29, 2024
- Official implementation of the EMNLP 2023 paper "LLM-FP4" · ☆222 · updated Dec 15, 2023
- [ACL 2025 Main] EfficientQAT: Efficient Quantization-Aware Training for Large Language Models · ☆336 · updated this week
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" · ☆124 · updated Apr 28, 2024
- PyTorch implementation of "PLATON: Pruning Large Transformer Models with Upper Confidence Bound of Weight Importance" (ICML 2022) · ☆46 · updated Oct 17, 2022
- [ACL 2024] A QAT-with-self-distillation framework for enhancing ultra-low-bit LLMs · ☆136 · updated May 16, 2024
- An algorithm for weight-activation quantization (W4A4, W4A8) of LLMs, supporting both static and dynamic quantization · ☆170 · updated Nov 26, 2025
- Official implementation of Half-Quadratic Quantization (HQQ) · ☆928 · updated Feb 26, 2026
- Code for the NeurIPS 2024 paper QuaRot, an end-to-end 4-bit inference method for large language models · ☆501 · updated Nov 26, 2024
- Official implementation of the DAC 2024 paper "GQA-LUT" · ☆22 · updated Dec 20, 2024
- Evaluation code for the paper "ModuLoRA: Finetuning 3-Bit LLMs on Consumer GPUs by Integrating with Modular Quantizers" (2023) · ☆13 · updated Dec 5, 2023
- [ICML 2024] BiLLM: Pushing the Limit of Post-Training Quantization for LLMs · ☆229 · updated Jan 11, 2025
- Code for the paper "LLM-QAT: Data-Free Quantization Aware Training for Large Language Models" · ☆324 · updated Mar 4, 2025
- Code for the AAAI 2024 Oral paper "OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Models" · ☆69 · updated Mar 7, 2024
- BESA, a differentiable weight-pruning technique for large language models · ☆17 · updated Mar 4, 2024
- (no description) · ☆52 · updated Nov 5, 2024
- Boosting 4-bit inference kernels with 2:4 sparsity · ☆95 · updated Sep 4, 2024
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization · ☆418 · updated Aug 13, 2024
- [ICML 2024] KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache · ☆384 · updated Nov 20, 2025
- [NeurIPS 2024 Spotlight] PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models · ☆419 · updated Jun 30, 2025
- Code for the ICML 2023 paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot" · ☆877 · updated Aug 20, 2024
- (no description) · ☆232 · updated Jun 24, 2024
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models · ☆35 · updated Jun 12, 2024
- Code for compression methods for transformers, accompanying the authors' publications · ☆459 · updated Jan 16, 2025
- [MLSys 2025] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; LServe: Efficient Long-sequence LLM Serving · ☆826 · updated Mar 6, 2025
- A simple and effective LLM pruning approach · ☆860 · updated Aug 9, 2024
- PyTorch code for the paper "QA-LoRA: Quantization-Aware Low-Rank Adaptation of Large Language Models" · ☆25 · updated Sep 27, 2023
- [MLSys 2024] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving · ☆336 · updated Jul 2, 2024
- Official code for ReLoRA, from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates" · ☆472 · updated Apr 21, 2024
- [TMLR] Official PyTorch implementation of "Efficient Quantization-aware Training with Adaptive Coreset Selection" · ☆39 · updated Aug 20, 2024
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration · ☆3,498 · updated Jul 17, 2025