yuhuixu1993 / qa-lora
Official PyTorch implementation of QA-LoRA
☆145 · Mar 13, 2024 · Updated last year
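The core trick in QA-LoRA is to average-pool the input over quantization groups before the low-rank branch, so that after training the adapter can be folded into the per-group parameters of the quantized weights instead of forcing them back to FP16. Below is a minimal PyTorch sketch of that idea, not the official implementation; the class name, rank, and group size are illustrative, and a full-precision nn.Linear stands in for the quantized base layer.

```python
import torch
import torch.nn as nn

class QALoRALinearSketch(nn.Module):
    """Sketch of QA-LoRA's group-pooled LoRA branch (illustrative names)."""

    def __init__(self, base: nn.Linear, rank: int = 8, group_size: int = 32):
        super().__init__()
        assert base.in_features % group_size == 0
        self.base = base              # stands in for a quantized linear layer
        self.group_size = group_size
        n_groups = base.in_features // group_size
        # The LoRA pair is sized to the quantization groups, not the full
        # input dimension, which is what makes the later merge possible.
        self.A = nn.Parameter(torch.randn(rank, n_groups) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Average-pool the inputs group-wise so the adapter acts per group.
        pooled = x.reshape(*x.shape[:-1], -1, self.group_size).mean(dim=-1)
        return self.base(x) + pooled @ self.A.t() @ self.B.t()
```

With group_size 32 on a 4096-wide layer the adapter sees 128 pooled features, e.g. `QALoRALinearSketch(nn.Linear(4096, 4096))(torch.randn(2, 4096))`.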
Alternatives and similar repositories for qa-lora
Users interested in qa-lora are comparing it to the repositories listed below.
- ☆129 · Jan 22, 2024 · Updated 2 years ago
- ☆235 · Jun 11, 2024 · Updated last year
- PB-LLM: Partially Binarized Large Language Models · ☆156 · Nov 20, 2023 · Updated 2 years ago
- GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ · ☆101 · May 30, 2023 · Updated 2 years ago
- QLoRA with Enhanced Multi GPU Support · ☆38 · Aug 8, 2023 · Updated 2 years ago
- [ICLR 2024 Spotlight] OmniQuant is a simple and powerful quantization technique for LLMs. · ☆887 · Nov 26, 2025 · Updated 2 months ago
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization · ☆713 · Aug 13, 2024 · Updated last year
- Code repo for the paper "LLM-QAT: Data-Free Quantization Aware Training for Large Language Models" · ☆321 · Mar 4, 2025 · Updated 11 months ago
- Official code for ReLoRA from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates" · ☆473 · Apr 21, 2024 · Updated last year
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving · ☆336 · Jul 2, 2024 · Updated last year
- Code for the paper "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" · ☆397 · Feb 24, 2024 · Updated last year
- ☆553 · Feb 8, 2026 · Updated last week
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". · ☆279 · Nov 3, 2023 · Updated 2 years ago
- PyTorch code for the paper "QA-LoRA: Quantization-Aware Low-Rank Adaptation of Large Language Models" · ☆25 · Sep 27, 2023 · Updated 2 years ago
- A simple and effective LLM pruning approach. · ☆848 · Aug 9, 2024 · Updated last year
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration · ☆3,436 · Jul 17, 2025 · Updated 7 months ago
- [ICML 2024 Oral] This project is the official implementation of our Accurate LoRA-Finetuning Quantization of LLMs via Information Retention · ☆67 · Apr 15, 2024 · Updated last year
- ☆203 · Dec 5, 2024 · Updated last year
- Official implementation of Half-Quadratic Quantization (HQQ) · ☆913 · Dec 18, 2025 · Updated last month
- Serving multiple LoRA-finetuned LLMs as one · ☆1,139 · May 8, 2024 · Updated last year
- Code for the NeurIPS 2024 paper QuaRot: end-to-end 4-bit inference of large language models. · ☆483 · Nov 26, 2024 · Updated last year
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. · ☆2,314 · May 11, 2025 · Updated 9 months ago
- ACL 2023 · ☆39 · Jun 6, 2023 · Updated 2 years ago
- [ACL 2024] A quantization-aware training (QAT) framework with self-distillation for ultra-low-bit LLMs. · ☆134 · May 16, 2024 · Updated last year
- ☆577 · Oct 29, 2024 · Updated last year
- Code and documents of LongLoRA and LongAlpaca (ICLR 2024 Oral) · ☆2,694 · Aug 14, 2024 · Updated last year
- ☆30 · Jul 22, 2024 · Updated last year
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… · ☆812 · Mar 6, 2025 · Updated 11 months ago
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers" (a toy sketch of its error-compensation loop appears after this list). · ☆2,256 · Mar 27, 2024 · Updated last year
- Full finetuning of large language models without large memory requirements · ☆94 · Sep 22, 2025 · Updated 4 months ago
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads · ☆2,705 · Jun 25, 2024 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs (a minimal loading sketch appears after this list) · ☆10,837 · Jun 10, 2024 · Updated last year
- YaRN: Efficient Context Window Extension of Large Language Models · ☆1,669 · Apr 17, 2024 · Updated last year
- FP16xINT4 LLM inference kernel that achieves near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. · ☆1,018 · Sep 4, 2024 · Updated last year
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters · ☆1,897 · Jan 21, 2024 · Updated 2 years ago
- CUDA and Triton implementations of Flash Attention with SoftmaxN. · ☆73 · May 26, 2024 · Updated last year
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward experts. · ☆226 · Sep 18, 2025 · Updated 4 months ago
- Reorder-based post-training quantization for large language models · ☆198 · May 17, 2023 · Updated 2 years ago
- [COLM 2025] DFRot: Achieving Outlier-Free and Massive Activation-Free for Rotated LLMs with Refined Rotation; Zhihu: https://zhuanlan.zhihu.c… · ☆29 · Mar 5, 2025 · Updated 11 months ago
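Since several entries above build on GPTQ, here is the update rule it revolves around: quantize one weight column at a time, then spread that column's quantization error over the not-yet-quantized columns using the inverse Hessian of the calibration activations. The following is a toy, unblocked sketch under those assumptions; the function name and the single symmetric scale are illustrative, and the actual repo uses a blocked Cholesky formulation with per-group scales.

```python
# Toy sketch of GPTQ's error-compensated, column-by-column quantization.
import torch

def gptq_sketch(W: torch.Tensor, X: torch.Tensor, bits: int = 4) -> torch.Tensor:
    """W: (out_features, in_features) weight; X: (n_samples, in_features)
    calibration activations. Returns the quantized-then-dequantized weight."""
    H = X.t() @ X                                             # Hessian proxy
    H += 0.01 * H.diagonal().mean() * torch.eye(H.shape[0])   # damping
    Hinv = torch.cholesky_inverse(torch.linalg.cholesky(H))
    W = W.clone()
    Q = torch.zeros_like(W)
    qmax = 2 ** (bits - 1) - 1
    scale = W.abs().max() / qmax          # one crude symmetric scale for all
    for j in range(W.shape[1]):
        # Round column j to the grid, then record its quantization error.
        q = (W[:, j] / scale).round().clamp(-qmax - 1, qmax) * scale
        Q[:, j] = q
        err = (W[:, j] - q) / Hinv[j, j]
        # Fold the error of column j into the remaining columns.
        W[:, j + 1:] -= err.unsqueeze(1) * Hinv[j, j + 1:].unsqueeze(0)
    return Q
```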
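The QLoRA entry above is most often reproduced with the transformers, peft, and bitsandbytes libraries; here is a minimal loading sketch, where the model id and LoRA hyperparameters are placeholders rather than values fixed by the paper.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # 4-bit base weights
    bnb_4bit_quant_type="nf4",              # NormalFloat4 data type from the paper
    bnb_4bit_use_double_quant=True,         # double quantization of the scales
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",             # placeholder model id
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],    # attach LoRA to attention projections
    task_type="CAUSAL_LM",
))
model.print_trainable_parameters()
```

With only the low-rank adapters trainable, print_trainable_parameters() typically reports well under 1% of the model's weights.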