arrmansa / Basic-UI-for-GPT-J-6B-with-low-vram
A repository to run GPT-J-6B on low-VRAM machines (4.2 GB VRAM minimum for a 2000-token context, 3.5 GB for a 1000-token context). Loading the model requires 12 GB of free RAM.
☆115 · Updated 3 years ago
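For context on the low-VRAM figures above: a common way to fit GPT-J-6B onto a small GPU is to load the checkpoint in fp16 and keep most of the weights in system RAM, streaming layers through the GPU as needed. The minimal sketch below uses Hugging Face `transformers` with `accelerate`-style offloading; it illustrates the general idea rather than this repository's own loading code, and the model id and memory caps are illustrative assumptions.

```python
# Minimal sketch (not this repository's loading code): fp16 weights plus
# CPU offloading so only part of the model resides on a small GPU at a time.
# Model id and memory limits below are assumptions for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/gpt-j-6b"  # fp16 checkpoint; loading needs roughly 12 GB of free RAM

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,                  # halves memory vs fp32
    device_map="auto",                          # requires `accelerate`; splits layers across GPU/CPU
    max_memory={0: "3.5GiB", "cpu": "12GiB"},   # cap GPU usage, spill the rest to system RAM
)

prompt = "The dungeon door creaks open and"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=50, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Offloading like this trades generation speed for VRAM, and actual GPU usage grows with context length, which is consistent with the 3.5 GB vs 4.2 GB figures quoted above.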
Alternatives and similar repositories for Basic-UI-for-GPT-J-6B-with-low-vram
Users interested in Basic-UI-for-GPT-J-6B-with-low-vram are comparing it to the libraries listed below.
- Colab notebooks to run a basic AI Dungeon clone using gpt-neo-2.7B ☆63 · Updated 3 years ago
- A basic UI for running GPT-Neo 2.7B on low VRAM (3 GB VRAM minimum) ☆36 · Updated 3 years ago
- Just a repo with some AI Dungeon scripts ☆29 · Updated 3 years ago
- A Gradio web UI for running large language models like GPT-J 6B, OPT, GALACTICA, LLaMA, and Pygmalion. ☆311 · Updated last year
- 🤗 Transformers: State-of-the-art Natural Language Processing for PyTorch and TensorFlow 2.0. ☆56 · Updated 3 years ago
- A ready-to-deploy container implementing an easy-to-use REST API to access language models. ☆64 · Updated 2 years ago
- A latent text-to-image diffusion model ☆67 · Updated 2 years ago
- ☆157 · Updated last year
- Tools with a GUI for GPT fine-tuning data preparation ☆23 · Updated 3 years ago
- Conversational language model toolkit for training against human preferences. ☆42 · Updated last year
- Framework-agnostic Python runtime for RWKV models ☆146 · Updated last year
- rwkv_chatbot ☆62 · Updated 2 years ago
- A one-click version of sd-webui-colab ☆161 · Updated 2 years ago
- Discord bot and interface for Stable Diffusion ☆280 · Updated 2 years ago
- NovelAI Research Tool and API implementations in Golang ☆43 · Updated 3 years ago
- A notebook that runs GPT-Neo with low VRAM (6 GB) and CUDA acceleration by loading it into GPU memory in smaller parts. ☆14 · Updated 3 years ago
- C/C++ implementation of PygmalionAI/pygmalion-6b ☆56 · Updated 2 years ago
- Extending Stable Diffusion prompts with suitable style cues using text generation ☆176 · Updated 2 years ago
- Custom scripts for the Stable Diffusion web UI by AUTOMATIC1111 ☆141 · Updated 2 years ago
- UI for experimenting with multimodal (text, image) models (Stable Diffusion). ☆367 · Updated last year
- ☆242 · Updated 2 years ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA ☆123 · Updated last year
- Adapted for Google Colab ☆100 · Updated 2 years ago
- An attempt to create an open-source AI companion that is self-hostable ☆81 · Updated 2 years ago
- 4-bit quantization of LLMs using GPTQ ☆49 · Updated last year
- Create soft prompts for fairseq 13B dense, GPT-J-6B, and GPT-Neo-2.7B for free in a Google Colab TPU instance ☆28 · Updated 2 years ago
- High-Resolution Image Synthesis with Latent Diffusion Models ☆76 · Updated 2 years ago
- Simple annotated implementation of GPT-NeoX in PyTorch ☆110 · Updated 2 years ago
- Inference code for LLaMA models ☆188 · Updated 2 years ago
- Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) with Stable Diffusion (tweaks focused on training faces) ☆143 · Updated 2 years ago