qwopqwop200 / gptqlora
GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ
☆101 · updated May 30, 2023
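The repo's premise (a frozen, GPTQ-quantized base model with trainable LoRA adapters on top) can be illustrated with a toy sketch. This is not the repository's actual code: the function names are hypothetical, and it uses simple round-to-nearest group quantization in plain Python rather than GPTQ's Hessian-aware rounding.

```python
# Toy sketch of the GPTQLoRA idea: a frozen, 4-bit group-quantized weight
# row plus a trainable low-rank (LoRA) update. Illustrative only; real
# GPTQ uses Hessian-aware error compensation, not round-to-nearest.

def quantize_group(ws, bits=4):
    """Quantize one group of floats to signed ints with a shared scale."""
    qmax = 2 ** (bits - 1) - 1                     # 7 for signed 4-bit
    scale = max(abs(w) for w in ws) / qmax or 1.0  # avoid zero scale
    q = [max(-qmax - 1, min(qmax, round(w / scale))) for w in ws]
    return q, scale

def quantize_row(row, group_size=4):
    """Split a weight row into groups, each with its own scale."""
    return [quantize_group(row[i:i + group_size])
            for i in range(0, len(row), group_size)]

def dequantize_row(groups):
    return [v * scale for q, scale in groups for v in q]

def lora_forward(x, w_groups, a, b, alpha=1.0):
    """y = (W_dequant + alpha * B A) x for one output row.

    `a` (r x d) and `b` (1 x r) are the trainable LoRA factors; the
    quantized base row `w_groups` stays frozen during finetuning.
    """
    w = dequantize_row(w_groups)
    base = sum(wi * xi for wi, xi in zip(w, x))
    # low-rank update: project x down with A, back up with B
    down = [sum(ai * xi for ai, xi in zip(arow, x)) for arow in a]
    delta = sum(bi * di for bi, di in zip(b, down))
    return base + alpha * delta

row = [0.31, -0.72, 0.05, 0.44, -0.11, 0.9, -0.33, 0.2]
groups = quantize_row(row, group_size=4)
x = [1.0] * 8
a = [[0.1] * 8]          # rank-1 down-projection
b = [0.5]                # rank-1 up-projection
y = lora_forward(x, groups, a, b)
```

Because each group stores only 4-bit integers plus one scale, memory for the base weights shrinks roughly 4x versus fp16, while all gradient updates flow through the small `a`/`b` factors.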
Alternatives and similar repositories for gptqlora
Users interested in gptqlora are comparing it to the repositories listed below.
- QLoRA with Enhanced Multi-GPU Support (☆38, updated Aug 8, 2023)
- Official PyTorch implementation of QA-LoRA (☆145, updated Mar 13, 2024)
- ☆535, updated Dec 1, 2023
- PyTorch code for the paper "QA-LoRA: Quantization-Aware Low-Rank Adaptation of Large Language Models" (☆25, updated Sep 27, 2023)
- Implementation of the MixCE method described in the ACL 2023 paper by Zhang et al. (☆20, updated May 29, 2023)
- A project showcasing engaging interactions between two AI chatbots (☆10, updated Jan 10, 2024)
- [EMNLP 2023, Main Conference] Sparse Low-rank Adaptation of Pre-trained Language Models (☆85, updated Mar 5, 2024)
- ☆235, updated Jun 11, 2024
- An algorithm for weight-activation quantization (W4A4, W4A8) of LLMs, supporting both static and dynamic quantization (☆172, updated Nov 26, 2025)
- ☆157, updated Jun 22, 2023
- SLTrain: a sparse plus low-rank approach for parameter- and memory-efficient pretraining (NeurIPS 2024) (☆39, updated Nov 1, 2024)
- A pipeline that uses API calls to agnostically convert unstructured data into structured training data (☆32, updated Sep 22, 2024)
- ☆33, updated Mar 28, 2025
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization (☆713, updated Aug 13, 2024)
- [ICLR 2024 spotlight] OmniQuant, a simple and powerful quantization technique for LLMs (☆887, updated Nov 26, 2025)
- ☆52, updated Jul 18, 2024
- QLoRA: Efficient Finetuning of Quantized LLMs (☆11, updated Jun 1, 2023)
- ☆13, updated May 21, 2023
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models" (☆279, updated Nov 3, 2023)
- Low-bit optimizers for PyTorch (☆138, updated Oct 9, 2023)
- PB-LLM: Partially Binarized Large Language Models (☆156, updated Nov 20, 2023)
- ☆16, updated Oct 19, 2022
- [ICML 2025] LoRA fine-tuning directly on quantized models (☆39, updated Nov 25, 2024)
- Collaborative inference of latent diffusion via Hivemind (☆12, updated May 29, 2023)
- Quantization in the Jagged Loss Landscape of Vision Transformers (☆13, updated Oct 22, 2023)
- ☆12, updated Apr 27, 2024
- ☆13, updated Sep 6, 2022
- [TMLR '23] Contrastive Search Is What You Need for Neural Text Generation (☆123, updated Mar 5, 2023)
- [NeurIPS 2024 Oral] DuQuant: Distributing Outliers via Dual Transformation Makes Stronger Quantized LLMs (☆180, updated Oct 3, 2024)
- ☆63, updated Sep 23, 2024
- 4-bit quantization of LLaMA using GPTQ (☆3,073, updated Jul 13, 2024)
- [ACL 2025] An inference-time decoding strategy with adaptive foresight sampling (☆108, updated May 18, 2025)
- Code accompanying our publications on compression methods for transformers (☆455, updated Jan 16, 2025)
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm (☆5,026, updated Apr 11, 2025)
- Official code for ReLoRA, from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates" (☆473, updated Apr 21, 2024)
- Enhancements in multimodal representation learning (☆41, updated Mar 15, 2024)
- ACL 2023 (☆39, updated Jun 6, 2023)
- BESA, a differentiable weight-pruning technique for large language models (☆17, updated Mar 4, 2024)
- ☆18, updated Mar 18, 2024