thisserand / alpaca-lora-finetune-language
☆122 · Updated 2 years ago
Alternatives and similar repositories for alpaca-lora-finetune-language
Users interested in alpaca-lora-finetune-language are comparing it to the libraries listed below; a minimal LoRA fine-tuning sketch follows the list.
- Repo for fine-tuning Causal LLMs ☆457 · Updated last year
- Patch for MPT-7B which allows using and training a LoRA ☆58 · Updated 2 years ago
- Tune any FALCON in 4-bit ☆465 · Updated last year
- Due to LLaMA's license restrictions, this project reimplements BLOOM-LoRA (the BLOOM license is much less restrictive; see https://huggingface.co/spaces/bigs… ☆184 · Updated 2 years ago
- ☆64 · Updated 2 years ago
- Fine-tuning LLMs using QLoRA ☆261 · Updated last year
- Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA ☆104 · Updated 2 months ago
- This repository contains code for extending the Stanford Alpaca synthetic instruction tuning to existing instruction-tuned models such as… ☆352 · Updated 2 years ago
- OpenAlpaca: A Fully Open-Source Instruction-Following Model Based On OpenLLaMA ☆302 · Updated 2 years ago
- Open-Source Implementation of WizardLM to turn documents into Q:A pairs for LLM fine-tuning ☆312 · Updated 9 months ago
- llama-4bit-colab ☆64 · Updated 2 years ago
- Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first app… ☆168 · Updated last year
- Reimplementation of the task generation part from the Alpaca paper ☆119 · Updated 2 years ago
- A joint community effort to create one central leaderboard for LLMs. ☆304 · Updated 11 months ago
- User-friendly LLaMA: Train or Run the model using PyTorch. Nothing else. ☆339 · Updated 2 years ago
- 📚 Datasets and models for instruction-tuning ☆238 · Updated last year
- Some simple scripts that I use day-to-day when working with LLMs and Huggingface Hub ☆162 · Updated last year
- Instruct-tuning LLaMA on consumer hardware ☆66 · Updated 2 years ago
- Repository for Chat LLaMA - training a LoRA for the LLaMA (1 or 2) models on HuggingFace with 8-bit or 4-bit quantization. Research only. ☆149 · Updated last year
- Fine-tuning GPT-J-6B on colab or equivalent PC GPU with your custom datasets: 8-bit weights with low-rank adaptors (LoRA) ☆74 · Updated 3 years ago
- ☆168 · Updated 2 years ago
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… ☆146 · Updated last year
- A command-line interface to generate textual and conversational datasets with LLMs. ☆301 · Updated last year
- Instruct-tune Open LLaMA / RedPajama / StableLM models on consumer hardware using QLoRA ☆81 · Updated last year
- Simple, hackable and fast implementation for training/finetuning medium-sized LLaMA-based models ☆177 · Updated this week
- Here is a Google Colab Notebook for fine-tuning Alpaca Lora (within 3 hours with a 40GB A100 GPU) ☆38 · Updated 2 years ago
- ☆128 · Updated 2 years ago
- ☆275 · Updated 2 years ago
- Small finetuned LLMs for a diverse set of useful tasks ☆127 · Updated 2 years ago
- FineTune LLMs in few lines of code (Text2Text, Text2Speech, Speech2Text) ☆240 · Updated last year
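
Most of the repositories above share the same core recipe: attach low-rank adapters (LoRA) to a pretrained causal LM and train only the adapter weights on instruction data. The sketch below illustrates that recipe with the Hugging Face transformers, peft, and datasets libraries; the base model name, the tatsu-lab/alpaca dataset slice, the target module names, and all hyperparameters are illustrative assumptions, not settings taken from any repository listed here.

```python
# Minimal LoRA fine-tuning sketch (assumed setup; not the exact code of any repo above).
from datasets import load_dataset
from peft import LoraConfig, TaskType, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "openlm-research/open_llama_3b_v2"  # assumption: any causal LM would do

tokenizer = AutoTokenizer.from_pretrained(base_model)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # LLaMA-style tokenizers ship without a pad token

model = AutoModelForCausalLM.from_pretrained(base_model)

# Attach low-rank adapters to the attention projections; only these weights are trained.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # module names are model-specific
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the base model

# Alpaca-style instruction data, flattened into plain-text prompts.
dataset = load_dataset("tatsu-lab/alpaca", split="train[:1000]")

def to_features(example):
    prompt = (
        f"### Instruction:\n{example['instruction']}\n\n"
        f"### Response:\n{example['output']}"
    )
    return tokenizer(prompt, truncation=True, max_length=512)

tokenized = dataset.map(to_features, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="lora-out",
        per_device_train_batch_size=4,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        logging_steps=10,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-out")  # saves only the small adapter weights, not the base model
```

The individual projects differ mainly in which base model they target (LLaMA, Falcon, MPT, BLOOM, GPT-J, RedPajama), how the base weights are loaded (full precision, 8-bit, 4-bit/QLoRA, GPTQ), and how the instruction data is generated or formatted; the adapter-training loop itself stays close to this shape.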