Xirider / finetune-gpt2xl
Guide: Finetune GPT2-XL (1.5 Billion Parameters) and GPT-NEO (2.7B) on a single GPU with Hugging Face Transformers using DeepSpeed
☆437 · Updated last year
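The guide itself covers finetuning with Hugging Face Transformers and DeepSpeed on a single GPU. As a rough illustration of that kind of setup (a minimal sketch, not the repo's exact script — the file name `finetune.py`, the `ds_config.json` path, the WikiText dataset, and all hyperparameters here are assumptions), a Trainer-plus-DeepSpeed run might look like:

```python
# Minimal sketch: fine-tune GPT2-XL with the Hugging Face Trainer,
# with DeepSpeed enabled via TrainingArguments. Assumes a DeepSpeed
# config file "ds_config.json" (e.g. ZeRO stage 2 with CPU offload)
# sits next to this script. Launch with:
#   deepspeed --num_gpus=1 finetune.py
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2-xl")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

model = AutoModelForCausalLM.from_pretrained("gpt2-xl")

# Small public corpus for illustration; swap in your own text dataset.
raw = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_ds = raw.map(tokenize, batched=True, remove_columns=["text"])

args = TrainingArguments(
    output_dir="gpt2xl-finetuned",
    per_device_train_batch_size=1,   # 1.5B params: keep the micro-batch tiny
    gradient_accumulation_steps=8,
    num_train_epochs=1,
    fp16=True,
    deepspeed="ds_config.json",      # hands optimizer state/offload to DeepSpeed
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The DeepSpeed JSON config (ZeRO optimizer-state sharding and CPU offload) is what makes a 1.5B-parameter model fit on a single consumer GPU; without it, the optimizer states alone would exhaust typical VRAM.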
Alternatives and similar repositories for finetune-gpt2xl:
Users interested in finetune-gpt2xl are comparing it to the repositories listed below.
- Repo for fine-tuning causal LLMs ☆453 · Updated last year
- Reproduce results and replicate training of T0 (Multitask Prompted Training Enables Zero-Shot Task Generalization) ☆464 · Updated 2 years ago
- Implementation of RETRO, DeepMind's retrieval-based attention network, in PyTorch ☆862 · Updated last year
- ⚡ Boost inference speed of T5 models by 5x & reduce the model size by 3x. ☆579 · Updated last year
- A search engine for ParlAI's BlenderBot project (and probably other ones as well) ☆131 · Updated 3 years ago
- Crosslingual Generalization through Multitask Finetuning ☆530 · Updated 6 months ago
- Prompt tuning toolkit for GPT-2 and GPT-Neo ☆88 · Updated 3 years ago
- ☆111 · Updated 2 years ago
- An open collection of implementation tips, tricks and resources for training large language models ☆472 · Updated 2 years ago
- This repository contains code for extending the Stanford Alpaca synthetic instruction tuning to existing instruction-tuned models such as… ☆351 · Updated last year
- ☆182 · Updated last year
- Fast Inference Solutions for BLOOM ☆561 · Updated 6 months ago
- Used for adaptive human-in-the-loop evaluation of language and embedding models. ☆308 · Updated 2 years ago
- Tune any FALCON in 4-bit ☆466 · Updated last year
- ☆130 · Updated 2 years ago
- Parallelformers: An Efficient Model Parallelization Toolkit for Deployment ☆785 · Updated last year
- Code repository for supporting the paper "Atlas: Few-shot Learning with Retrieval Augmented Language Models" (https://arxiv.org/abs/2208.03…) ☆531 · Updated last year
- ☆318 · Updated 3 years ago
- Fine-tuning GPT-2 Small for Question Answering ☆130 · Updated 2 years ago
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile ☆115 · Updated 2 years ago
- This repository contains the code for "Generating Datasets with Pretrained Language Models". ☆188 · Updated 3 years ago
- ☆1,555 · Updated last year
- A simple and working implementation of Electra, the fastest way to pretrain language models from scratch, in PyTorch ☆225 · Updated last year
- Expanding natural instructions ☆991 · Updated last year
- ☆459 · Updated last year
- UnifiedQA: Crossing Format Boundaries With a Single QA System ☆432 · Updated 2 years ago
- Fine-tuning GPT-J-6B on Colab or an equivalent PC GPU with your custom datasets: 8-bit weights with low-rank adaptors (LoRA) ☆74 · Updated 2 years ago
- ☆251 · Updated 2 years ago
- simpleT5, built on top of PyTorch Lightning⚡️ and Transformers🤗, lets you quickly train your T5 models. ☆394 · Updated last year
- XtremeDistil framework for distilling/compressing massive multilingual neural network models to tiny and efficient models for AI at scale ☆154 · Updated last year