georgesung / llm_qlora
Fine-tuning LLMs using QLoRA
☆250 · Updated 9 months ago
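For orientation, the core QLoRA recipe this repo implements (a 4-bit NF4 base model via bitsandbytes plus small trainable LoRA adapters via peft) looks roughly like the sketch below. This is a minimal illustration assuming the Hugging Face stack; the model name and LoRA hyperparameters are placeholders, not values taken from llm_qlora's configs.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base model quantized to 4-bit NF4 with double quantization (the QLoRA setup).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # placeholder base model, not from the repo's config
    quantization_config=bnb_config,
    device_map="auto",
)

# Freeze and prepare the quantized base, then attach trainable LoRA adapters.
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,  # illustrative hyperparameters
    target_modules=["q_proj", "v_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

From here the model trains with any standard causal-LM loop (e.g. transformers' Trainer or trl's SFTTrainer); only the adapter weights receive gradients, which is what keeps the memory footprint low enough for a single GPU.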
Alternatives and similar repositories for llm_qlora:
Users interested in llm_qlora are comparing it to the libraries listed below.
- Merge Transformers language models using gradient parameters. ☆205 · Updated 7 months ago
- This is our own implementation of 'Layer-Selective Rank Reduction'. ☆233 · Updated 10 months ago
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes, …). ☆147 · Updated last year
- ☆168 · Updated last year
- A bagel, with everything. ☆317 · Updated 11 months ago
- QLoRA: Efficient Finetuning of Quantized LLMs ☆77 · Updated 11 months ago
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B for free ☆230 · Updated 4 months ago
- Tune any FALCON in 4-bit ☆466 · Updated last year
- TheBloke's Dockerfiles ☆305 · Updated last year
- ☆152 · Updated 8 months ago
- Customizable implementation of the self-instruct paper. ☆1,040 · Updated last year
- Falcon LLM ggml framework with CPU and GPU support ☆246 · Updated last year
- Some simple scripts that I use day-to-day when working with LLMs and Huggingface Hub ☆158 · Updated last year
- Scripts for fine-tuning Llama2 via SFT and DPO. ☆195 · Updated last year
- Spherical merging of PyTorch/HF-format language models with minimal feature loss. ☆117 · Updated last year
- ☆122 · Updated last year
- Generate textbook-quality synthetic LLM pretraining data ☆498 · Updated last year
- Low-Rank adapter extraction for fine-tuned transformers models (see the sketch after this list)
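The last entry refers to recovering a LoRA adapter from a fully fine-tuned checkpoint. The usual approach is a truncated SVD of the per-layer weight delta between the tuned and base models; below is a minimal sketch under that assumption (the function name, signature, and default rank are illustrative, not that repo's actual API).

```python
import torch

def extract_lora(base_w: torch.Tensor, tuned_w: torch.Tensor, rank: int = 16):
    """Approximate (tuned_w - base_w) with a rank-`rank` product B @ A, LoRA-style.
    Hypothetical helper for illustration; not the listed repo's interface."""
    delta = (tuned_w - base_w).float()
    # Thin SVD of the weight delta; keep only the top-`rank` singular directions.
    U, S, Vh = torch.linalg.svd(delta, full_matrices=False)
    sqrt_s = torch.sqrt(S[:rank])
    B = U[:, :rank] * sqrt_s                # (out_features, rank)
    A = sqrt_s.unsqueeze(1) * Vh[:rank, :]  # (rank, in_features)
    return A, B  # delta ≈ B @ A, loadable as lora_A / lora_B weights
```

Applied per linear layer whose weights changed during fine-tuning, this rank-r factorization is, by the Eckart–Young theorem, the best rank-r approximation of the delta in Frobenius norm.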