sleekmike / Finetune_GPT-J_6B_8-bit
Fine-tuning GPT-J-6B on Colab or an equivalent PC GPU with your custom datasets: 8-bit weights with low-rank adaptors (LoRA)
☆74 · Updated 3 years ago
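The repository's core idea, low-rank adaptors layered on top of frozen (8-bit) base weights, can be sketched numerically. The class below is an illustrative toy, not code from the repository: the frozen weight `W0` is augmented with a trainable update `(alpha/r) * B @ A`, where only the small matrices `A` and `B` would be trained.

```python
import numpy as np

class LoRALinear:
    """Toy linear layer with a low-rank adaptor (LoRA). Illustrative names/shapes."""

    def __init__(self, d_in, d_out, r=4, alpha=16, seed=0):
        rng = np.random.default_rng(seed)
        self.W0 = rng.standard_normal((d_out, d_in))    # frozen base weight
        self.A = rng.standard_normal((r, d_in)) * 0.01  # trainable, small random init
        self.B = np.zeros((d_out, r))                   # trainable, zero init
        self.scale = alpha / r                          # standard LoRA scaling

    def __call__(self, x):
        # Base path plus the low-rank update; only A and B get gradients.
        return x @ self.W0.T + self.scale * (x @ self.A.T) @ self.B.T

layer = LoRALinear(8, 4)
x = np.ones((2, 8))
# B starts at zero, so the adaptor is a no-op and the layer exactly
# reproduces the frozen base output at the start of training.
print(np.allclose(layer(x), x @ layer.W0.T))  # True
```

Because `B` is zero-initialized, fine-tuning starts from the pretrained model's behavior and only the `r * (d_in + d_out)` adaptor parameters per layer are updated, which is what makes fitting a 6B-parameter model into Colab-class VRAM feasible when combined with 8-bit base weights.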
Alternatives and similar repositories for Finetune_GPT-J_6B_8-bit
Users interested in Finetune_GPT-J_6B_8-bit are comparing it to the repositories listed below.
- Simple annotated implementation of GPT-NeoX in PyTorch ☆110 · Updated 2 years ago
- ☆122 · Updated 2 years ago
- ☆46 · Updated 2 years ago
- Repo for fine-tuning causal LLMs ☆457 · Updated last year
- Fine-tuning 6-billion-parameter GPT-J (& other models) with LoRA and 8-bit compression ☆66 · Updated 2 years ago
- ☆50 · Updated 2 years ago
- ☆130 · Updated 3 years ago
- A Multilingual Dataset for Parsing Realistic Task-Oriented Dialogs ☆114 · Updated 2 years ago
- 🤗 Transformers: State-of-the-art Natural Language Processing for PyTorch and TensorFlow 2.0 ☆56 · Updated 3 years ago
- Generate an NFT or train a new model in just a few clicks! Train as much as you can; others will resume from your checkpoint! ☆153 · Updated 3 years ago
- Conversational AI tooling & personas built on Cohere's LLMs ☆174 · Updated last year
- A collection of simple transformer-based chatbots ☆18 · Updated 2 years ago
- ☆27 · Updated 3 years ago
- Reimplementation of the task-generation part of the Alpaca paper ☆119 · Updated 2 years ago
- ☆64 · Updated 2 years ago
- Used for adaptive human-in-the-loop evaluation of language and embedding models ☆309 · Updated 2 years ago
- Training & implementation of chatbots leveraging a GPT-like architecture with the aitextgen package to enable dynamic conversations ☆48 · Updated 2 years ago
- BIG: Back In the Game of Creative AI ☆27 · Updated 2 years ago
- Instruct-tuning LLaMA on consumer hardware ☆66 · Updated 2 years ago
- A series of notebooks demonstrating how to build simple NLP web apps with Gradio and Hugging Face Transformers ☆45 · Updated 3 years ago
- Smol but mighty language model ☆62 · Updated 2 years ago
- A repository to run gpt-j-6b on low-VRAM machines (4.2 GB minimum VRAM for a 2,000-token context, 3.5 GB for a 1,000-token context). Model load… ☆114 · Updated 3 years ago
- ☆171 · Updated 5 months ago
- Create soft prompts for fairseq 13B dense, GPT-J-6B, and GPT-Neo-2.7B for free in a Google Colab TPU instance ☆28 · Updated 2 years ago
- ☆23 · Updated 2 years ago
- [WIP] A 🔥 interface for running code in the cloud ☆85 · Updated 2 years ago
- llama-4bit-colab ☆64 · Updated 2 years ago
- Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA ☆104 · Updated last month
- Chat with your data privately using MPT-30b ☆181 · Updated 2 years ago
- ☆63 · Updated 3 years ago