sleekmike / Finetune_GPT-J_6B_8-bit
Fine-tuning GPT-J-6B on Colab or an equivalent PC GPU with your custom datasets: 8-bit weights with low-rank adapters (LoRA)
☆74 · Updated 3 years ago
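The repo's approach combines two ideas: the frozen base weights are stored in 8-bit to fit in GPU memory, and only small low-rank adapter matrices are trained. As an illustrative sketch (not the repo's actual code), the LoRA update replaces a frozen weight `W` with `W + (alpha / r) * B @ A`, where `A` and `B` are the trainable rank-`r` matrices; all names below are hypothetical:

```python
# Illustrative LoRA arithmetic in plain Python (not the repo's implementation).
# A frozen weight W receives a trainable low-rank update B @ A, scaled by alpha/r.

def matmul(a, b):
    """Multiply two matrices represented as lists of rows."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]

def lora_weight(W, A, B, alpha, r):
    """Effective weight after adaptation: W + (alpha / r) * (B @ A)."""
    BA = matmul(B, A)          # d_out x d_in, but rank at most r
    scale = alpha / r
    return [[W[i][j] + scale * BA[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# Toy example: frozen 2x2 weight, rank-1 adapters (r = 1).
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 2.0]]               # shape r x d_in
B = [[0.5], [0.25]]            # shape d_out x r
print(lora_weight(W, A, B, alpha=2.0, r=1))
# → [[2.0, 2.0], [0.5, 2.0]]
```

Because only `A` and `B` are updated, the number of trainable parameters is `r * (d_in + d_out)` per adapted layer instead of `d_in * d_out`, which is what makes fine-tuning a 6B-parameter model feasible on a single Colab-class GPU.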
Alternatives and similar repositories for Finetune_GPT-J_6B_8-bit
Users that are interested in Finetune_GPT-J_6B_8-bit are comparing it to the libraries listed below
- Repo for fine-tuning causal LLMs ☆458 · Updated last year
- ☆123 · Updated 2 years ago
- Simple annotated implementation of GPT-NeoX in PyTorch ☆110 · Updated 3 years ago
- Used for adaptive human-in-the-loop evaluation of language and embedding models ☆308 · Updated 2 years ago
- Reimplementation of the task-generation part of the Alpaca paper ☆119 · Updated 2 years ago
- A multilingual dataset for parsing realistic task-oriented dialogs ☆115 · Updated 2 years ago
- Fact-checking LLM outputs with self-ask ☆305 · Updated 2 years ago
- ☆131 · Updated 3 years ago
- 📚 Datasets and models for instruction tuning ☆238 · Updated 2 years ago
- Instruction-tuning LLaMA on consumer hardware ☆66 · Updated 2 years ago
- ☆46 · Updated 2 years ago
- Fine-tune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA ☆104 · Updated 7 months ago
- Guide: fine-tune GPT-2 XL (1.5 billion parameters) and GPT-Neo (2.7 B) on a single GPU with Hugging Face Transformers using DeepSpe… ☆435 · Updated 2 years ago
- Training and implementation of chatbots with a GPT-like architecture, using the aitextgen package to enable dynamic conversations ☆49 · Updated 3 years ago
- ☆34 · Updated 2 years ago
- A dataset featuring diverse dialogues between two ChatGPT (gpt-3.5-turbo) instances, with system messages written by GPT-4. Covering vario… ☆164 · Updated 2 years ago
- Fine-tuning the 6-billion-parameter GPT-J (and other models) with LoRA and 8-bit compression ☆68 · Updated 3 years ago
- Chat with your data privately using MPT-30B ☆184 · Updated 2 years ago
- Experiments with generating open-source language-model assistants ☆97 · Updated 2 years ago
- Conversational AI tooling and personas built on Cohere's LLMs ☆174 · Updated 2 years ago
- Smol but mighty language model ☆63 · Updated 2 years ago
- Tune any FALCON in 4-bit ☆465 · Updated 2 years ago
- A library for squeakily cleaning and filtering language datasets ☆49 · Updated 2 years ago
- Generate an NFT or train a new model in just a few clicks! Train as much as you can; others will resume from your checkpoint! ☆157 · Updated 3 years ago
- Patch for MPT-7B that allows using and training a LoRA ☆58 · Updated 2 years ago
- ☆64 · Updated 2 years ago
- ☆172 · Updated 10 months ago
- Harnessing the memory power of the camelids ☆147 · Updated 2 years ago
- Provides a way to use the GPT-QLLama model as an API ☆44 · Updated 2 years ago
- TextReducer: a tool for summarization and information extraction ☆85 · Updated last year