mallorbc / Finetune_LLMs
Repo for fine-tuning Causal LLMs
☆457 Updated last year
Alternatives and similar repositories for Finetune_LLMs
Users interested in Finetune_LLMs are comparing it to the libraries listed below
- Tune any FALCON in 4-bit ☆466 Updated last year
- Guide: Finetune GPT2-XL (1.5 Billion Parameters) and finetune GPT-NEO (2.7 B) on a single GPU with Huggingface Transformers using DeepSpe… ☆436 Updated 2 years ago
- This repository contains code for extending the Stanford Alpaca synthetic instruction tuning to existing instruction-tuned models such as… ☆353 Updated 2 years ago
- User-friendly LLaMA: Train or Run the model using PyTorch. Nothing else. ☆339 Updated 2 years ago
- ☆123 Updated 2 years ago
- ☆460 Updated last year
- UI tool for fine-tuning and testing your own LoRA models based on LLaMA, GPT-J and more. One-click run on Google Colab. + A Gradio ChatGPT… ☆474 Updated 2 years ago
- ☆535 Updated last year
- simpleT5 is built on top of PyTorch Lightning⚡️ and Transformers🤗 and lets you quickly train your T5 models. ☆397 Updated 2 years ago
- Open-source pre-training implementation of Google's LaMDA in PyTorch. Adding RLHF similar to ChatGPT. ☆472 Updated last year
- Ask Me Anything language model prompting ☆546 Updated 2 years ago
- LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions ☆821 Updated 2 years ago
- Code for fine-tuning the Platypus family of LLMs using LoRA ☆628 Updated last year
- Due to LLaMA's license restrictions, we try to reimplement BLOOM-LoRA (the much less restrictive BLOOM license is here: https://huggingface.co/spaces/bigs… ☆184 Updated 2 years ago
- Crosslingual Generalization through Multitask Finetuning ☆535 Updated 11 months ago
- The prime repository for state-of-the-art Multilingual Question Answering research and development. ☆736 Updated 7 months ago
- Patch for MPT-7B which allows using and training a LoRA ☆58 Updated 2 years ago
- Alpaca dataset from Stanford, cleaned and curated ☆1,567 Updated 2 years ago
- Simple, hackable and fast implementation for training/finetuning medium-sized LLaMA-based models ☆177 Updated 3 weeks ago
- Open-Source Implementation of WizardLM to turn documents into Q:A pairs for LLM fine-tuning ☆312 Updated 10 months ago
- Customizable implementation of the self-instruct paper. ☆1,050 Updated last year
- OpenAlpaca: A Fully Open-Source Instruction-Following Model Based On OpenLLaMA ☆302 Updated 2 years ago
- Fine-tuning GPT-J-6B on colab or equivalent PC GPU with your custom datasets: 8-bit weights with low-rank adaptors (LoRA) ☆74 Updated 3 years ago
- Ungreedy subword tokenizer and vocabulary trainer for Python, Go & Javascript ☆599 Updated last year
- Used for adaptive human in the loop evaluation of language and embedding models. ☆311 Updated 2 years ago
- ☆131 Updated 3 years ago
- ☆444 Updated 2 years ago
- 🥤🧑🏻‍🚀 Code and dataset for our EMNLP 2023 paper - "SODA: Million-scale Dialogue Distillation with Social Commonsense Contextualization… ☆232 Updated last year
- Fine-tuning LLMs using QLoRA ☆262 Updated last year
- Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA (a minimal sketch of this recipe follows the list) ☆104 Updated 3 months ago
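
Many of the entries above, including the QLoRA and PEFT LoRA repositories at the end of the list, revolve around the same core recipe: freeze a pretrained causal LM, attach low-rank adapters, and train only the adapter weights. Below is a minimal sketch of that recipe using the Hugging Face transformers, peft, and datasets libraries. The base model, dataset, and hyperparameters are illustrative assumptions chosen to keep the example small, not values taken from any repository listed here.

```python
# Minimal LoRA fine-tuning sketch with Hugging Face transformers + peft.
# The model name, dataset, and hyperparameters are assumptions for
# illustration only; swap in your own.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "EleutherAI/gpt-neo-125m"  # small model so the sketch fits on one GPU
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-Neo ships without a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Attach low-rank adapters; the base weights stay frozen and only the
# adapter parameters (a small fraction of the total) are trained.
lora_config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# A small slice of a cleaned Alpaca-style instruction dataset (assumed here).
dataset = load_dataset("yahma/alpaca-cleaned", split="train[:1000]")

def tokenize(batch):
    text = [i + "\n" + o for i, o in zip(batch["instruction"], batch["output"])]
    return tokenizer(text, truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="lora-out",
        per_device_train_batch_size=2,
        num_train_epochs=1,
        learning_rate=2e-4,
    ),
    train_dataset=tokenized,
    # Causal-LM collator: pads each batch and copies input_ids into labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-out")  # writes only the small adapter weights
```

The QLoRA variants in the list follow the same pattern but first load the base model in 4-bit precision via bitsandbytes before attaching the adapters, which is what makes fine-tuning 7B+ models feasible on consumer hardware.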