PyTorch code for the paper "QA-LoRA: Quantization-Aware Low-Rank Adaptation of Large Language Models"
☆25 · Sep 27, 2023 · Updated 2 years ago
Alternatives and similar repositories for qa-lora
Users interested in qa-lora are comparing it to the libraries listed below.
- Official PyTorch implementation of QA-LoRA ☆145 · Mar 13, 2024 · Updated 2 years ago
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models ☆35 · Jun 12, 2024 · Updated last year
- ☆15 · Apr 11, 2024 · Updated last year
- ☆20 · Dec 5, 2024 · Updated last year
- [ICML 2024 Oral] This project is the official implementation of our Accurate LoRA-Finetuning Quantization of LLMs via Information Retenti… ☆67 · Apr 15, 2024 · Updated last year
- Quantized Side Tuning: Fast and Memory-Efficient Tuning of Quantized Large Language Models ☆49 · Nov 5, 2024 · Updated last year
- ☆52 · Nov 5, 2024 · Updated last year
- Streamlines the creation of datasets for training a Large Language Model with instruction-input-output triplets. The default configuration … ☆13 · Apr 17, 2023 · Updated 2 years ago
- ViTALiTy (HPCA'23) Code Repository ☆23 · Mar 13, 2023 · Updated 3 years ago
- ☆17 · Apr 11, 2024 · Updated last year
- Fork of Flame repo for training of some new stuff in development ☆19 · Mar 17, 2026 · Updated last week
- ☆15 · Sep 24, 2023 · Updated 2 years ago
- ☆25 · Oct 31, 2024 · Updated last year
- AFPQ code implementation ☆23 · Nov 6, 2023 · Updated 2 years ago
- Official repository of the paper: Marking Code Without Breaking It: Code Watermarking for Detecting LLM-Generated Code (Findings of EACL … ☆12 · Feb 11, 2026 · Updated last month
- [COLM 2025] DFRot: Achieving Outlier-Free and Massive Activation-Free for Rotated LLMs with Refined Rotation; Zhihu: https://zhuanlan.zhihu.c… ☆29 · Mar 5, 2025 · Updated last year
- GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ ☆101 · May 30, 2023 · Updated 2 years ago
- Repo for the EMNLP'24 Paper "Dual-Space Knowledge Distillation for Large Language Models". A general white-box KD framework for both same… ☆61 · Updated this week
- AdaSkip: Adaptive Sublayer Skipping for Accelerating Long-Context LLM Inference ☆20 · Jan 24, 2025 · Updated last year
- ☆235 · Jun 11, 2024 · Updated last year
- Code for "Thinking Forward: Memory-Efficient Federated Finetuning of Language Models" (NeurIPS 2024). Spry is a federated learning al… ☆12 · Oct 8, 2024 · Updated last year
- ☆25 · Dec 11, 2021 · Updated 4 years ago
- My Implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated ☆34 · Aug 14, 2024 · Updated last year
- Official Implementation of FastKV: Decoupling of Context Reduction and KV Cache Compression for Prefill-Decoding Acceleration ☆30 · Nov 22, 2025 · Updated 4 months ago
- Implementation of Microscaling data formats in SystemVerilog ☆30 · Jul 6, 2025 · Updated 8 months ago
- A multipurpose Discord bot utilizing a variety of AI models ☆14 · May 21, 2022 · Updated 3 years ago
- ☆30 · Jul 22, 2024 · Updated last year
- LongSpec: Long-Context Lossless Speculative Decoding with Efficient Drafting and Verification ☆76 · Jul 14, 2025 · Updated 8 months ago
- ☆53 · Jul 18, 2024 · Updated last year
- A repository of projects and datasets under active development by Alignment Lab AI ☆22 · Dec 22, 2023 · Updated 2 years ago
- Code for the AAAI 2024 Oral paper "OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Model… ☆69 · Mar 7, 2024 · Updated 2 years ago
- Code and data for AAAI 2022 paper "Multilingual Code Snippets Training for Program Translation" ☆10 · Mar 7, 2022 · Updated 4 years ago
- QQQ is an innovative and hardware-optimized W4A8 quantization solution for LLMs ☆155 · Aug 21, 2025 · Updated 7 months ago
- ACL 2023 ☆39 · Jun 6, 2023 · Updated 2 years ago
- [NeurIPS 2023] ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer ☆30 · Dec 6, 2023 · Updated 2 years ago
- [UNDER CONSTRUCTION] Unofficial implementation of ABC-Net ☆12 · Feb 5, 2018 · Updated 8 years ago
- [ICCV 2023] EMQ: Evolving Training-free Proxies for Automated Mixed Precision Quantization ☆28 · Dec 6, 2023 · Updated 2 years ago
- [COLM 2024] SKVQ: Sliding-window Key and Value Cache Quantization for Large Language Models ☆24 · Oct 5, 2024 · Updated last year
- ☆35 · Dec 22, 2025 · Updated 3 months ago