GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ
☆101 · updated May 30, 2023
Alternatives and similar repositories for gptqlora
Users that are interested in gptqlora are comparing it to the libraries listed below.
- Official PyTorch implementation of QA-LoRA (☆146 · updated Mar 13, 2024)
- QLoRA with Enhanced Multi GPU Support (☆38 · updated Aug 8, 2023)
- ☆536 · updated Dec 1, 2023
- ☆18 · updated Mar 18, 2024
- PyTorch code for the paper "QA-LoRA: Quantization-Aware Low-Rank Adaptation of Large Language Models" (☆25 · updated Sep 27, 2023)
- An algorithm for weight-activation quantization (W4A4, W4A8) of LLMs, supporting both static and dynamic quantization (☆171 · updated Nov 26, 2025)
- ☆157 · updated Jun 22, 2023
- A vLLM proxy server adding security and multi-model management for vLLM servers (☆12 · updated May 30, 2024)
- ☆235 · updated Jun 11, 2024
- Implementation of the MixCE method described in the ACL 2023 paper by Zhang et al. (☆20 · updated May 29, 2023)
- ☆53 · updated Jul 18, 2024
- [ICLR 2024 Spotlight] OmniQuant is a simple and powerful quantization technique for LLMs (☆896 · updated Nov 26, 2025)
- A pipeline that uses API calls to agnostically convert unstructured data into structured training data (☆32 · updated Sep 22, 2024)
- PB-LLM: Partially Binarized Large Language Models (☆155 · updated Nov 20, 2023)
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization (☆718 · updated Aug 13, 2024)
- [NeurIPS 2024 Oral🔥] DuQuant: Distributing Outliers via Dual Transformation Makes Stronger Quantized LLMs (☆181 · updated Apr 24, 2026)
- ☆34 · updated Mar 28, 2025
- SLTrain: a sparse-plus-low-rank approach for parameter- and memory-efficient pretraining (NeurIPS 2024) (☆39 · updated Nov 1, 2024)
- [EMNLP 2023, Main Conference] Sparse Low-rank Adaptation of Pre-trained Language Models (☆86 · updated Mar 5, 2024)
- 4-bit quantization of LLaMA using GPTQ (☆3,072 · updated Jul 13, 2024)
- ☆13 · updated Apr 27, 2024
- Load any CLIP model with a standardized interface (☆22 · updated Oct 20, 2025)
- Low-bit optimizers for PyTorch (☆138 · updated Oct 9, 2023)
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm (☆5,059 · updated Apr 11, 2025)
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models" (☆280 · updated Nov 3, 2023)
- ☆63 · updated Sep 23, 2024
- Code releases for compression methods for transformers, accompanying our publications (☆462 · updated Jan 16, 2025)
- Multipack distributed sampler for fast padding-free training of LLMs (☆208 · updated Aug 10, 2024)
- Official implementation of the EMNLP 2023 paper "Outlier Suppression+: Accurate quantization of large language models by equivalent and opti…" (☆51 · updated Oct 21, 2023)
- [ICDCS 2023] Evaluation and Optimization of Gradient Compression for Distributed Deep Learning (☆10 · updated Apr 28, 2023)
- A more memory-efficient rewrite of the HF Transformers implementation of Llama for use with quantized weights (☆2,915 · updated Sep 30, 2023)
- ACL 2023 (☆39 · updated Jun 6, 2023)
- [ACL 2025 Main] EfficientQAT: Efficient Quantization-Aware Training for Large Language Models (☆339 · updated Apr 10, 2026)
- [TMLR'23] Contrastive Search Is What You Need For Neural Text Generation (☆123 · updated Mar 5, 2023)
- ☆25 · updated Oct 31, 2024
- SGQuant: Squeezing the Last Bit on Graph Neural Networks with Specialized Quantization (☆11 · updated Aug 12, 2020)
- [COLM 2025] DFRot: Achieving Outlier-Free and Massive Activation-Free for Rotated LLMs with Refined Rotation; Zhihu: https://zhuanlan.zhihu.c… (☆30 · updated Mar 5, 2025)
- Reorder-based post-training quantization for large language models (☆199 · updated May 17, 2023)
- Collaborative inference of latent diffusion via hivemind (☆12 · updated May 29, 2023)