thisserand / alpaca-lora-finetune-language
☆122, updated last year
Alternatives and similar repositories for alpaca-lora-finetune-language
Users interested in alpaca-lora-finetune-language are comparing it to the libraries listed below.
- Instruct-tune Open LLaMA / RedPajama / StableLM models on consumer hardware using QLoRA (☆81, updated last year)
- ☆64, updated 2 years ago
- Due to the restrictions of LLaMA, we try to reimplement BLOOM-LoRA (the much less restrictive BLOOM license is here: https://huggingface.co/spaces/bigs…) (☆185, updated last year)
- Patch for MPT-7B which allows using and training a LoRA (☆58, updated 2 years ago)
- Tune any FALCON in 4-bit (☆467, updated last year)
- Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA (☆103, updated 2 weeks ago)
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes…) (☆147, updated last year)
- ☆166, updated 2 years ago
- ☆128, updated 2 years ago
- Python bindings for llama.cpp (☆65, updated last year)
- Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first app… (☆167, updated last year)
- Fine-tuning LLMs using QLoRA (☆255, updated 11 months ago)
- Reimplementation of the task generation part from the Alpaca paper (☆119, updated 2 years ago)
- Finetune BLOOM (☆40, updated 2 years ago)
- Here is a Google Colab Notebook for fine-tuning Alpaca LoRA (within 3 hours with a 40GB A100 GPU) (☆38, updated 2 years ago)
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA (☆123, updated last year)
- QLoRA: Efficient Finetuning of Quantized LLMs (☆78, updated last year)
- Merge Transformers language models using gradient parameters (☆207, updated 9 months ago)
- This repository contains code for extending the Stanford Alpaca synthetic instruction tuning to existing instruction-tuned models such as… (☆352, updated last year)
- llama-4bit-colab (☆64, updated 2 years ago)
- Open-Source Implementation of WizardLM to turn documents into Q:A pairs for LLM fine-tuning (☆307, updated 7 months ago)
- User-friendly LLaMA: Train or Run the model using PyTorch. Nothing else. (☆339, updated 2 years ago)
- Repo for fine-tuning Causal LLMs (☆456, updated last year)
- ☆534, updated last year
- ☆276, updated 2 years ago
- LoRA weights for Cerebras-GPT-2.7b finetuned on Alpaca dataset with shorter prompt (☆63, updated 2 years ago)
- HuggingChat-like UI in Gradio (☆70, updated 2 years ago)
- Experiments with generating open-source language model assistants (☆97, updated 2 years ago)
- ☆14, updated last year
- Simple, hackable and fast implementation for training/finetuning medium-sized LLaMA-based models (☆171, updated last month)
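
Most of the repositories above share the same core pattern: freeze a base causal language model, attach low-rank adapters (LoRA), and train only those adapters, which is what makes fine-tuning on consumer hardware feasible. Below is a minimal sketch of that pattern using the Hugging Face `transformers` and `peft` libraries; the base model name, rank, and target module names are illustrative assumptions, not taken from any specific repository in this list.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# Assumption: any Hugging Face causal LM works here; the LLaMA-style
# module names ("q_proj", "v_proj") are used purely for illustration.
base_model = "huggyllama/llama-7b"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.float16)

# Freeze the base weights and attach trainable low-rank adapters.
lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling factor applied to the update
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters
```

From here a standard `transformers.Trainer` loop over an instruction dataset trains only the adapter weights; the QLoRA variants in the list apply the same idea on top of a 4-bit quantized base model.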