gustavecortal / gpt-j-fine-tuning-example
Fine-tuning 6-Billion GPT-J (& other models) with LoRA and 8-bit compression
☆65 · Updated 2 years ago
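The technique this repository demonstrates, training low-rank adapters (LoRA) on top of a frozen 8-bit GPT-J, can be sketched with the current Hugging Face stack. The snippet below is a minimal illustration under assumed tooling (`transformers`, `peft`, `bitsandbytes`), not the repository's own code; the hyperparameters are placeholders.

```python
# Minimal sketch of LoRA fine-tuning on an 8-bit base model with the
# Hugging Face stack. Illustrative only, not the repository's code.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the frozen base model with 8-bit weights so it fits on one consumer GPU.
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6b",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)  # cast norms, enable input grads

# Attach low-rank adapters; only these small matrices receive gradients.
lora_config = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # GPT-J attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the 6B weights
```

Training then proceeds as usual (e.g. with `transformers.Trainer`) while the 8-bit base weights stay frozen.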
Related projects
Alternatives and complementary repositories for gpt-j-fine-tuning-example
- Demonstration that fine-tuning a RoPE model on longer sequences than it saw during pre-training extends the model's context limit ☆63 · Updated last year
- Experiments with generating open-source language model assistants ☆97 · Updated last year
- Exploring fine-tuning public checkpoints on filtered 8K sequences from the Pile ☆115 · Updated last year
- Multi-Domain Expert Learning ☆67 · Updated 9 months ago
- Prompt tuning toolkit for GPT-2 and GPT-Neo ☆90 · Updated 3 years ago
- 🤗 Transformers: State-of-the-art Natural Language Processing for PyTorch and TensorFlow 2.0 ☆55 · Updated 2 years ago
- ☆24 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs (see the 4-bit sketch after this list) ☆77 · Updated 7 months ago
- Create soft prompts for fairseq 13B dense, GPT-J-6B and GPT-Neo-2.7B for free in a Google Colab TPU instance ☆27 · Updated last year
- Patch for MPT-7B which allows using and training a LoRA ☆58 · Updated last year
- Inference script for Meta's LLaMA models using a Hugging Face wrapper ☆111 · Updated last year
- Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA ☆101 · Updated 3 months ago
- QLoRA with enhanced multi-GPU support ☆36 · Updated last year
- One-stop shop for all things CARP ☆59 · Updated 2 years ago
- SparseGPT + GPTQ compression of LLMs like LLaMA, OPT, Pythia ☆41 · Updated last year
- [ICLR 2023] Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners ☆111 · Updated 2 months ago
- Instruct-tune Open LLaMA / RedPajama / StableLM models on consumer hardware using QLoRA ☆80 · Updated 11 months ago
- A library for squeakily cleaning and filtering language datasets ☆45 · Updated last year
- Spherically merge PyTorch/HF-format language models with minimal feature loss (see the SLERP sketch after this list) ☆112 · Updated last year
- ☆37 · Updated last year
- ☆128 · Updated 2 years ago
- Reimplementation of the task generation part from the Alpaca paper ☆119 · Updated last year
- HuggingChat-like UI in Gradio ☆64 · Updated last year
- Fine-tuning GPT-J-6B on Colab or an equivalent PC GPU with your custom datasets: 8-bit weights with low-rank adaptors (LoRA) ☆74 · Updated 2 years ago
- Image-diffusion block-merging technique applied to transformer-based language models ☆54 · Updated last year
- Tune MPTs ☆84 · Updated last year
- Distill ChatGPT's coding ability into a small model (1B) ☆24 · Updated last year
- RWKV-v2-RNN trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details. ☆66 · Updated 2 years ago
- ☆32 · Updated last year
- Tune LLaMA-7B on the Alpaca dataset using PEFT / LoRA, based on @zphang's https://github.com/zphang/minimal-llama scripts ☆23 · Updated last year
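The QLoRA entries flagged above follow the same adapter recipe as the 8-bit sketch near the top, but quantize the frozen base weights to 4-bit NF4. A minimal sketch of the loading step, again assuming the `transformers` + `bitsandbytes` stack; the base checkpoint below is a placeholder, not one taken from those repositories.

```python
# QLoRA-style 4-bit NF4 loading; the LoRA adapter setup afterwards is the
# same as in the 8-bit sketch above. Illustrative only.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",           # NormalFloat4 data type from the QLoRA paper
    bnb_4bit_use_double_quant=True,      # also quantize the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "openlm-research/open_llama_7b",     # placeholder base checkpoint
    quantization_config=bnb_config,
    device_map="auto",
)
```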
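The spherical-merge entry interpolates two checkpoints along the great circle between their weight vectors instead of averaging them linearly, which better preserves weight norms. Below is a generic SLERP sketch, not that repository's exact implementation.

```python
# Generic SLERP (spherical linear interpolation) between two weight tensors,
# applied parameter-by-parameter when merging checkpoints. Illustrative only.
import torch

def slerp(w0: torch.Tensor, w1: torch.Tensor, t: float, eps: float = 1e-7) -> torch.Tensor:
    v0, v1 = w0.flatten().float(), w1.flatten().float()
    # Angle between the two flattened weight vectors.
    cos_omega = torch.clamp(torch.dot(v0, v1) / (v0.norm() * v1.norm() + eps), -1.0, 1.0)
    omega = torch.acos(cos_omega)
    if omega.abs() < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation.
        merged = (1 - t) * v0 + t * v1
    else:
        sin_omega = torch.sin(omega)
        merged = (torch.sin((1 - t) * omega) / sin_omega) * v0 + \
                 (torch.sin(t * omega) / sin_omega) * v1
    return merged.reshape(w0.shape).to(w0.dtype)

# Usage over two aligned state dicts (hypothetical names sd_a, sd_b):
# merged = {name: slerp(sd_a[name], sd_b[name], t=0.5) for name in sd_a}
```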