sleekmike / Finetune_GPT-J_6B_8-bit
Fine-tuning GPT-J-6B on Colab or an equivalent PC GPU with your custom datasets: 8-bit weights with low-rank adapters (LoRA)
☆74 · Updated 2 years ago
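The repository's approach (a frozen 8-bit GPT-J base with small trainable low-rank adapters) can be sketched with the Hugging Face `transformers`, `bitsandbytes`, and `peft` libraries. This is an illustrative assumption, not the repo's own code, which ships its own 8-bit quantization and adapter layers; the model ID, LoRA hyperparameters, and target module names below are examples only.

```python
# Minimal sketch of 8-bit weights + LoRA fine-tuning, assuming the Hugging Face
# peft/bitsandbytes stack rather than this repo's custom 8-bit implementation.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "EleutherAI/gpt-j-6b"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Load the frozen base model with 8-bit weights so it fits on a single consumer GPU.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    load_in_8bit=True,   # requires bitsandbytes
    device_map="auto",   # requires accelerate
)

# Attach small trainable low-rank adapters to the attention projections.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "v_proj"],  # GPT-J attention projection layer names
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA weights are trainable
```

Training then proceeds with a standard `Trainer` or a custom loop over your dataset; only the adapter weights receive gradients, which is what keeps the memory footprint within a single consumer GPU.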
Alternatives and similar repositories for Finetune_GPT-J_6B_8-bit:
Users interested in Finetune_GPT-J_6B_8-bit are comparing it to the repositories listed below.
- Simple Annotated implementation of GPT-NeoX in PyTorch ☆110 · Updated 2 years ago
- 🤗 Transformers: State-of-the-art Natural Language Processing for PyTorch and TensorFlow 2.0. ☆56 · Updated 3 years ago
- Instruct-tuning LLaMA on consumer hardware ☆66 · Updated last year
- Guide: Finetune GPT2-XL (1.5 Billion Parameters) and finetune GPT-NEO (2.7 B) on a single GPU with Huggingface Transformers using DeepSpe… ☆437 · Updated last year
- Repo for fine-tuning Causal LLMs ☆454 · Updated 10 months ago
- Reimplementation of the task generation part from the Alpaca paper ☆119 · Updated last year
- Fine-tuning 6-Billion GPT-J (& other models) with LoRA and 8-bit compression ☆66 · Updated 2 years ago
- [WIP] A 🔥 interface for running code in the cloud ☆86 · Updated last year
- A Multilingual Dataset for Parsing Realistic Task-Oriented Dialogs ☆114 · Updated last year
- A repository to run GPT-J-6B on low-VRAM machines (4.2 GB minimum VRAM for a 2000-token context, 3.5 GB for a 1000-token context). Model load… ☆115 · Updated 3 years ago
- Used for adaptive human-in-the-loop evaluation of language and embedding models. ☆306 · Updated last year
- Conversational language model toolkit for training against human preferences. ☆40 · Updated 9 months ago
- Experiments with generating open-source language model assistants ☆97 · Updated last year
- Generate an NFT or train a new model in just a few clicks! Train as much as you can; others will resume from the checkpoint! ☆149 · Updated 2 years ago
- Unofficial Python bindings for the Rust llm library. 🐍❤️🦀 ☆74 · Updated last year
- Patch for MPT-7B which allows using and training a LoRA ☆58 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA ☆123 · Updated last year
- llama-4bit-colab ☆65 · Updated last year
- QLoRA with Enhanced Multi GPU Support ☆36 · Updated last year
- A library for squeakily cleaning and filtering language datasets. ☆45 · Updated last year
- A basic UI for running GPT-Neo 2.7B on low VRAM (3 GB VRAM minimum) ☆36 · Updated 3 years ago