mallorbc / GPT_Neo_quotes_dataset
☆27 · Updated 3 years ago
Alternatives and similar repositories for GPT_Neo_quotes_dataset
Users interested in GPT_Neo_quotes_dataset are comparing it to the repositories listed below.
- ☆34 · Updated 3 years ago
- 🤗 Transformers: State-of-the-art Natural Language Processing for PyTorch and TensorFlow 2.0. ☆56 · Updated 3 years ago
- ☆122 · Updated 2 years ago
- Create soft prompts for fairseq 13B dense, GPT-J-6B and GPT-Neo-2.7B for free in a Google Colab TPU instance ☆28 · Updated 2 years ago
- Patch for MPT-7B which allows using and training a LoRA ☆58 · Updated 2 years ago
- A basic UI for running GPT-Neo-2.7B on low VRAM (3 GB VRAM minimum) ☆36 · Updated 4 years ago
- Simple annotated implementation of GPT-NeoX in PyTorch ☆110 · Updated 2 years ago
- ☆63 · Updated 3 years ago
- Training and implementation of chatbots leveraging a GPT-like architecture with the aitextgen package to enable dynamic conversations. ☆48 · Updated 2 years ago
- ☆50 · Updated 2 years ago
- ☆64 · Updated 2 years ago
- Smol but mighty language model ☆62 · Updated 2 years ago
- ☆29 · Updated last year
- ☆130 · Updated 3 years ago
- ☆9 · Updated 4 years ago
- Fine-tuning GPT-J-6B on Colab or an equivalent PC GPU with your custom datasets: 8-bit weights with low-rank adaptors (LoRA) ☆74 · Updated 3 years ago
- ☆21 · Updated 4 years ago
- Reweight GPT - a simple neural network using the transformer architecture for next-character prediction ☆57 · Updated last year
- Notebook for running GPT-Neo models, which are based on GPT-3 ☆62 · Updated 3 years ago
- Fine-tuning 6-billion-parameter GPT-J (& other models) with LoRA and 8-bit compression ☆66 · Updated 2 years ago
- Just a repo with some AI Dungeon scripts ☆30 · Updated 4 years ago
- A repository to run gpt-j-6b on low-VRAM machines (4.2 GB minimum VRAM for 2000-token context, 3.5 GB for 1000-token context). Model load… ☆114 · Updated 3 years ago
- Drop-in replacement for OpenAI, but with open models. ☆152 · Updated 2 years ago
- 4-bit quantization of SantaCoder using GPTQ ☆51 · Updated 2 years ago
- Code for OpenAI Whisper Web App Demo ☆93 · Updated 2 years ago
- Conversational language model toolkit for training against human preferences. ☆41 · Updated last year
- ☆23 · Updated 2 years ago
- llama-4bit-colab ☆64 · Updated 2 years ago
- Implementation of PersonaGPT Dialog Model ☆113 · Updated 3 years ago
- Colab notebooks to run a basic AI Dungeon clone using gpt-neo-2.7B ☆61 · Updated 4 years ago
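For orientation, several of the repositories above revolve around the same workflow: fine-tuning GPT-Neo or GPT-J on a small text dataset using low-rank adapters (LoRA) over 8-bit weights so it fits on a single consumer GPU or a Colab instance. The sketch below is not taken from any of the listed projects; it assumes the Hugging Face Transformers, PEFT, Datasets, and bitsandbytes libraries, and the `quotes.txt` file and hyperparameters are placeholders.

```python
# Minimal sketch (assumptions noted above): LoRA fine-tuning of GPT-Neo-2.7B in 8-bit.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "EleutherAI/gpt-neo-2.7B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

# Load the base model with 8-bit weights so it fits in a few GB of VRAM.
model = AutoModelForCausalLM.from_pretrained(model_name, load_in_8bit=True, device_map="auto")
model = prepare_model_for_kbit_training(model)

# Attach low-rank adapters to the attention projections (GPT-Neo module names).
lora_config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                         target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)

# Hypothetical plain-text dataset: one quote per line in quotes.txt.
dataset = load_dataset("text", data_files={"train": "quotes.txt"})["train"]
dataset = dataset.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=256),
                      remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt-neo-quotes-lora",
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=8,
                           num_train_epochs=1, fp16=True, logging_steps=10),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The LoRA-over-8-bit approach is why only a few million adapter parameters are trained while the frozen base weights stay quantized; individual repositories above differ in the exact quantization scheme (8-bit, GPTQ 4-bit) and target model (GPT-Neo, GPT-J, MPT, LLaMA).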