sleekmike / Finetune_GPT-J_6B_8-bit
Fine-tuning GPT-J-6B on colab or equivalent PC GPU with your custom datasets: 8-bit weights with low-rank adaptors (LoRA)
☆74 · Updated 2 years ago
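The recipe this repo describes, freezing GPT-J-6B in 8-bit precision and training only small low-rank adaptors, can be sketched with today's Hugging Face transformers/peft/bitsandbytes stack. This is not the repo's own code (it predates PEFT and uses custom 8-bit patches); the model id, LoRA hyperparameters, and target modules below are illustrative assumptions.

```python
# Minimal sketch: 8-bit base weights + LoRA adaptors, assuming the
# transformers + peft + bitsandbytes stack (not this repo's actual code).
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_name = "EleutherAI/gpt-j-6b"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Quantize the frozen base weights to 8 bits so the model fits in Colab-class VRAM.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Attach small trainable low-rank adaptors; only these are updated during training.
lora_config = LoraConfig(
    r=8,                                  # adaptor rank (assumed)
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # GPT-J attention projections (assumed)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # a tiny fraction of the 6B parameters
```

From here, the PEFT-wrapped model can be passed to a standard `transformers` `Trainer` on a custom dataset, which is the workflow the repo title points at.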
Alternatives and similar repositories for Finetune_GPT-J_6B_8-bit
Users interested in Finetune_GPT-J_6B_8-bit are comparing it to the libraries listed below.
- Fine-tuning 6-Billion GPT-J (& other models) with LoRA and 8-bit compression ☆66 · Updated 2 years ago
- Repo for fine-tuning causal LLMs ☆456 · Updated last year
- Guide: Finetune GPT2-XL (1.5 Billion Parameters) and finetune GPT-NEO (2.7 B) on a single GPU with Huggingface Transformers using DeepSpeed… ☆437 · Updated last year
- ☆130 · Updated 2 years ago
- ☆83 · Updated last year
- ☆27 · Updated 3 years ago
- Simple Annotated implementation of GPT-NeoX in PyTorch ☆110 · Updated 2 years ago
- Reimplementation of the task generation part from the Alpaca paper ☆119 · Updated 2 years ago
- 🤗Transformers: State-of-the-art Natural Language Processing for Pytorch and TensorFlow 2.0. ☆56 · Updated 3 years ago
- ☆35 · Updated 3 years ago
- Experiments with generating opensource language model assistants ☆97 · Updated 2 years ago
- ☆28 · Updated 2 years ago
- ☆64 · Updated 2 years ago
- Instruct-tuning LLaMA on consumer hardware ☆65 · Updated 2 years ago
- A library for squeakily cleaning and filtering language datasets. ☆46 · Updated last year
- [WIP] A 🔥 interface for running code in the cloud ☆85 · Updated 2 years ago
- ☆50 · Updated 2 years ago
- Used for adaptive human-in-the-loop evaluation of language and embedding models. ☆308 · Updated 2 years ago
- This repository contains code for extending the Stanford Alpaca synthetic instruction tuning to existing instruction-tuned models such as… ☆352 · Updated last year
- A repository to run gpt-j-6b on low-VRAM machines (4.2 GB minimum VRAM for 2000-token context, 3.5 GB for 1000-token context). Model load… ☆115 · Updated 3 years ago
- Framework-agnostic Python runtime for RWKV models ☆146 · Updated last year
- Create soft prompts for fairseq 13B dense, GPT-J-6B and GPT-Neo-2.7B for free in a Google Colab TPU instance ☆28 · Updated 2 years ago
- QLoRA: Efficient Finetuning of Quantized LLMs ☆78 · Updated last year
- Patch for MPT-7B which allows using and training a LoRA ☆58 · Updated 2 years ago
- Drop-in replacement for OpenAI, but with Open models. ☆152 · Updated 2 years ago
- Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA ☆103 · Updated 2 weeks ago
- BIG: Back In the Game of Creative AI ☆27 · Updated 2 years ago
- ☆61 · Updated 2 years ago
- Training & implementation of chatbots leveraging GPT-like architecture with the aitextgen package to enable dynamic conversations. ☆49 · Updated 2 years ago
- The GeoV model is a large language model designed by Georges Harik and uses Rotary Positional Embeddings with Relative distances (RoPER)… ☆121 · Updated 2 years ago