georgesung / llm_qlora
Fine-tuning LLMs using QLoRA
☆252 · Updated 11 months ago
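For orientation, the core recipe that llm_qlora (and many of the repositories listed below) builds on is QLoRA: load the base model with 4-bit NF4 quantization via bitsandbytes, then train small low-rank adapters with peft while the quantized weights stay frozen. The sketch below is a minimal illustration assuming the standard Hugging Face stack; the model id, target modules, and hyperparameters are illustrative placeholders, not llm_qlora's actual configuration.

```python
# Minimal QLoRA fine-tuning sketch (transformers + peft + bitsandbytes).
# Model id and hyperparameters are placeholder assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "meta-llama/Llama-2-7b-hf"  # hypothetical base model

# 4-bit NF4 quantization with double quantization -- the core of QLoRA.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Attach low-rank adapters to the attention projections; only these small
# matrices receive gradients, while the 4-bit base weights stay frozen.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# From here, train with transformers.Trainer or trl's SFTTrainer on your dataset.
```

Because only the adapter matrices are trained (typically well under 1% of total parameters), fine-tuning 7B-13B models becomes feasible on a single consumer GPU.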
Alternatives and similar repositories for llm_qlora
Users interested in llm_qlora are comparing it to the repositories listed below.
- A bagel, with everything. ☆320 · Updated last year
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆231 · Updated 6 months ago
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… ☆147 · Updated last year
- This is our own implementation of 'Layer Selective Rank Reduction' ☆238 · Updated 11 months ago
- Merge Transformers language models using gradient parameters. ☆208 · Updated 9 months ago
- Fine-tune Mistral-7B on 3090s, A100s, H100s ☆711 · Updated last year
- Tune any FALCON in 4-bit ☆467 · Updated last year
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆698 · Updated last year
- Instruct-tune Open LLaMA / RedPajama / StableLM models on consumer hardware using QLoRA ☆81 · Updated last year
- ☆168 · Updated last year
- Falcon LLM ggml framework with CPU and GPU support ☆246 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs ☆77 · Updated last year
- Some simple scripts that I use day-to-day when working with LLMs and the Hugging Face Hub ☆162 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA ☆123 · Updated last year
- Comprehensive analysis of the difference in performance of QLoRA, LoRA, and full fine-tunes. ☆82 · Updated last year
- Automatically evaluate your LLMs in Google Colab ☆625 · Updated last year
- Local LLM ReAct Agent with Guidance ☆158 · Updated last year
- A set of scripts and notebooks on LLM fine-tuning and dataset creation ☆109 · Updated 7 months ago
- Domain Adapted Language Modeling Toolkit - E2E RAG ☆320 · Updated 6 months ago
- ☆535 · Updated last year
- Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first app… ☆168 · Updated last year
- ☆95 · Updated last year
- Low-Rank adapter extraction for fine-tuned transformers models; the inverse operation, merging an adapter back into its base model, is sketched after this list ☆173 · Updated last year
- batched loras ☆342 · Updated last year
- ☆92 · Updated last year
- Tune MPTs ☆84 · Updated last year
- Fast & more realistic evaluation of chat language models. Includes leaderboard. ☆186 · Updated last year
- Lightweight demos for finetuning LLMs. Powered by 🤗 transformers and open-source datasets. ☆76 · Updated 6 months ago
- Scripts for fine-tuning Llama2 via SFT and DPO. ☆200 · Updated last year
- The code we currently use to fine-tune models. ☆114 · Updated last year
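Several entries above (adapter extraction, model merging, quantized inference wrappers) deal with what happens after QLoRA training. A common final step is folding the trained adapter back into the base weights so the result can be served as an ordinary checkpoint. Below is a hedged sketch using peft's merge_and_unload; because merging directly into a 4-bit quantized base is not supported, the base model is first reloaded in half precision. Model ids and paths are placeholders.

```python
# Hedged sketch: merging a trained LoRA/QLoRA adapter back into its base
# model for adapter-free inference. Model id and paths are placeholders.
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",   # hypothetical base checkpoint
    torch_dtype=torch.float16,    # merge in fp16, not in 4-bit
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "path/to/adapter")  # placeholder path
model = model.merge_and_unload()  # folds W + BA into a single weight matrix
model.save_pretrained("path/to/merged-model")
```

The merged checkpoint loads like any regular transformers model and can then be re-quantized (e.g. to GPTQ or ggml formats, as several of the inference-oriented repositories above do) for deployment.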