kuleshov-group / llmtools
Finetuning Large Language Models on One Consumer GPU in 2 Bits
☆734 · May 25, 2024 · Updated last year
Alternatives and similar repositories for llmtools
Users interested in llmtools are comparing it to the libraries listed below.
- Evaluation code repository for the paper "ModuLoRA: Finetuning 3-Bit LLMs on Consumer GPUs by Integrating with Modular Quantizers" (2023…) · ☆13 · Dec 5, 2023 · Updated 2 years ago
- QLoRA: Efficient Finetuning of Quantized LLMs · ☆10,835 · Jun 10, 2024 · Updated last year
- 4-bit quantization of LLaMA using GPTQ · ☆3,074 · Jul 13, 2024 · Updated last year
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters · ☆5,936 · Mar 14, 2024 · Updated last year
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm · ☆5,028 · Apr 11, 2025 · Updated 10 months ago
- MiniLLM is a minimal system for running modern LLMs on consumer-grade GPUs · ☆950 · May 15, 2023 · Updated 2 years ago
- A more memory-efficient rewrite of the HF Transformers implementation of Llama for use with quantized weights · ☆2,911 · Sep 30, 2023 · Updated 2 years ago
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization · ☆713 · Aug 13, 2024 · Updated last year
- Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4-bit quantization, LoRA and LLaMA-Ad… · ☆6,087 · Jul 1, 2025 · Updated 7 months ago
- Instruct-tune LLaMA on consumer hardware · ☆18,978 · Jul 29, 2024 · Updated last year
- Accessible large language models via k-bit quantization for PyTorch · ☆7,939 · Jan 22, 2026 · Updated 3 weeks ago
- Code for the paper "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" · ☆396 · Feb 24, 2024 · Updated last year
- Dromedary: towards helpful, ethical and reliable LLMs · ☆1,143 · Sep 18, 2025 · Updated 4 months ago
- Running large language models on a single GPU for throughput-oriented scenarios · ☆9,384 · Oct 28, 2024 · Updated last year
- LLMs built upon Evol Instruct: WizardLM, WizardCoder, WizardMath · ☆9,477 · Jun 7, 2025 · Updated 8 months ago
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning · ☆20,619 · Updated this week
- LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions · ☆823 · May 6, 2023 · Updated 2 years ago
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers" · ☆2,254 · Mar 27, 2024 · Updated last year
- LLM as a Chatbot Service · ☆3,332 · Nov 20, 2023 · Updated 2 years ago
- OpenLLaMA, a permissively licensed open-source reproduction of Meta AI's LLaMA 7B trained on the RedPajama dataset · ☆7,530 · Jul 16, 2023 · Updated 2 years ago
- Large Language Model Text Generation Inference · ☆10,757 · Jan 8, 2026 · Updated last month
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads · ☆2,705 · Jun 25, 2024 · Updated last year
- A family of open-sourced Mixture-of-Experts (MoE) large language models · ☆1,657 · Mar 8, 2024 · Updated last year
- Large language models (LLMs) made easy; EasyLM is a one-stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Fl… · ☆2,514 · Aug 13, 2024 · Updated last year
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks · ☆7,188 · Jul 11, 2024 · Updated last year
- Tune any FALCON in 4-bit · ☆463 · Sep 1, 2023 · Updated 2 years ago
- [NeurIPS 2023] MeZO: Fine-Tuning Language Models with Just Forward Passes (https://arxiv.org/abs/2305.17333) · ☆1,143 · Jan 11, 2024 · Updated 2 years ago
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration · ☆3,436 · Jul 17, 2025 · Updated 6 months ago
- General technology for enabling AI capabilities with LLMs and MLLMs · ☆4,284 · Dec 22, 2025 · Updated last month
- Universal LLM Deployment Engine with ML Compilation · ☆22,012 · Updated this week
- Cramming the training of a (BERT-type) language model into limited compute · ☆1,361 · Jun 13, 2024 · Updated last year
- Public repo for the NeurIPS 2023 paper "Unlimiformer: Long-Range Transformers with Unlimited Length Input" · ☆1,066 · Mar 7, 2024 · Updated last year
- Effortless plug-and-play optimizer that cuts model training costs by 50%; a new optimizer that is 2x faster than Adam on LLMs · ☆381 · Jun 4, 2024 · Updated last year
- An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All. · ☆8,498 · Jan 28, 2026 · Updated 2 weeks ago
- Robust recipes to align language models with human and AI preferences · ☆5,495 · Sep 8, 2025 · Updated 5 months ago
- Instruction Tuning with GPT-4 · ☆4,340 · Jun 11, 2023 · Updated 2 years ago
- Let ChatGPT teach your own chatbot in hours with a single GPU! · ☆3,167 · Mar 17, 2024 · Updated last year
- An open-source implementation of Google's PaLM models · ☆820 · Jun 21, 2024 · Updated last year
- Tools for merging pretrained large language models · ☆6,783 · Jan 26, 2026 · Updated 2 weeks ago
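The common thread through most of the list above (llmtools, GPTQ, AWQ, QLoRA, bitsandbytes, QuIP, SqueezeLLM) is low-bit weight quantization. As background, here is a minimal, self-contained sketch of round-to-nearest uniform k-bit quantization in plain Python; it illustrates the general idea only and is not the specific algorithm used by any of these libraries.

```python
def quantize_kbit(weights, bits=4):
    """Uniform round-to-nearest quantization of a list of float weights.

    Maps [min, max] onto the integer codes 0 .. 2**bits - 1 and returns
    the codes plus the (scale, zero_point) needed to dequantize.
    """
    levels = 2 ** bits - 1
    w_min, w_max = min(weights), max(weights)
    scale = (w_max - w_min) / levels if w_max > w_min else 1.0
    codes = [round((w - w_min) / scale) for w in weights]  # each in [0, levels]
    return codes, scale, w_min

def dequantize(codes, scale, w_min):
    """Reconstruct approximate float weights from integer codes."""
    return [c * scale + w_min for c in codes]

# Toy example: 4-bit quantization of a handful of weights.
weights = [-1.3, -0.2, 0.0, 0.45, 0.9, 2.1]
codes, scale, w_min = quantize_kbit(weights, bits=4)
restored = dequantize(codes, scale, w_min)

# Round-to-nearest bounds the per-weight error by half a quantization step.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

Real systems differ from this sketch in important ways: they quantize per-channel or per-group rather than per-tensor, and methods like GPTQ and AWQ choose codes and scales to minimize the layer's output error instead of simple min-max rounding over the raw weights.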