thisserand / alpaca-lora-finetune-language
☆123 · Updated 2 years ago
Alternatives and similar repositories for alpaca-lora-finetune-language
Users interested in alpaca-lora-finetune-language are comparing it to the libraries listed below.
- Patch for MPT-7B which allows using and training a LoRA ☆58 · Updated 2 years ago
- Fine-tuning LLMs using QLoRA ☆266 · Updated last year
- Tune any FALCON in 4-bit ☆464 · Updated 2 years ago
- Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA ☆103 · Updated 6 months ago
- Repo for fine-tuning causal LLMs ☆457 · Updated last year
- ☆64 · Updated 2 years ago
- Due to LLaMA's license restrictions, we try to reimplement BLOOM-LoRA (much less restrictive BLOOM license here https://huggingface.co/spaces/bigs… ☆184 · Updated 2 years ago
- This repository contains code for extending the Stanford Alpaca synthetic instruction tuning to existing instruction-tuned models such as… ☆356 · Updated 2 years ago
- QLoRA: Efficient Finetuning of Quantized LLMs ☆79 · Updated last year
- ☆128 · Updated 2 years ago
- Reimplementation of the task generation part from the Alpaca paper ☆118 · Updated 2 years ago
- Instruct-tuning LLaMA on consumer hardware ☆65 · Updated 2 years ago
- Open-Source Implementation of WizardLM to turn documents into Q:A pairs for LLM fine-tuning ☆309 · Updated last year
- User-friendly LLaMA: Train or Run the model using PyTorch. Nothing else. ☆339 · Updated 2 years ago
- ☆167 · Updated 2 years ago
- Some simple scripts that I use day-to-day when working with LLMs and Huggingface Hub ☆160 · Updated 2 years ago
- Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first app… ☆169 · Updated last year
- Small finetuned LLMs for a diverse set of useful tasks ☆127 · Updated 2 years ago
- 📚 Datasets and models for instruction-tuning ☆237 · Updated 2 years ago
- ☆275 · Updated 2 years ago
- A command-line interface to generate textual and conversational datasets with LLMs. ☆298 · Updated 2 years ago
- Fact-checking LLM outputs with self-ask ☆304 · Updated 2 years ago
- Fine-tuning GPT-J-6B on Colab or an equivalent PC GPU with your custom datasets: 8-bit weights with low-rank adaptors (LoRA) ☆74 · Updated 3 years ago
- Here is a Google Colab Notebook for fine-tuning Alpaca LoRA (within 3 hours with a 40GB A100 GPU) ☆38 · Updated 2 years ago
- Instruct-tune Open LLaMA / RedPajama / StableLM models on consumer hardware using QLoRA ☆81 · Updated last year
- Fine-tune and quantize Llama-2-like models to generate Python code using QLoRA, Axolotl, etc. ☆64 · Updated last year
- Simple, hackable and fast implementation for training/finetuning medium-sized LLaMA-based models ☆182 · Updated 2 months ago
- OpenAlpaca: A Fully Open-Source Instruction-Following Model Based On OpenLLaMA ☆301 · Updated 2 years ago
- FineTune LLMs in a few lines of code (Text2Text, Text2Speech, Speech2Text) ☆245 · Updated last year
- Chat with your data privately using MPT-30b ☆183 · Updated 2 years ago