arrmansa / Basic-UI-for-GPT-Neo-with-low-vram
A basic UI for running GPT-Neo 2.7B on low VRAM (3 GB VRAM minimum)
☆36 · Updated 3 years ago
Alternatives and similar repositories for Basic-UI-for-GPT-Neo-with-low-vram:
Users interested in Basic-UI-for-GPT-Neo-with-low-vram are comparing it to the libraries listed below.
- A notebook that runs GPT-Neo with low VRAM (6 GB) and CUDA acceleration by loading it into GPU memory in smaller parts.☆14 · Updated 3 years ago
- A repository to run GPT-J-6B on low-VRAM machines (4.2 GB minimum VRAM for 2000-token context, 3.5 GB for 1000-token context). Model load…☆115 · Updated 3 years ago
- ☆28 · Updated last year
- 🤗 Transformers: State-of-the-art Natural Language Processing for PyTorch and TensorFlow 2.0.☆56 · Updated 3 years ago
- Create soft prompts for fairseq 13B dense, GPT-J-6B and GPT-Neo-2.7B for free in a Google Colab TPU instance☆28 · Updated 2 years ago
- A ready-to-deploy container implementing an easy-to-use REST API for accessing language models.☆64 · Updated 2 years ago
- Colab notebooks to run a basic AI Dungeon clone using GPT-Neo-2.7B☆64 · Updated 3 years ago
- Fork of kingoflolz/mesh-transformer-jax with memory-usage optimizations and support for GPT-Neo, GPT-NeoX, BLOOM, OPT and fairseq dense L…☆22 · Updated 2 years ago
- Hidden Engrams: Long Term Memory for Transformer Model Inference☆35 · Updated 3 years ago
- Conversational language model toolkit for training against human preferences.☆42 · Updated last year
- Framework-agnostic Python runtime for RWKV models☆146 · Updated last year
- Platform- and API-agnostic library for powering chatbots☆24 · Updated 2 years ago
- Doohickey is a Stable Diffusion tool for technical artists who want to stay up to date with the latest developments in the field.☆39 · Updated 2 years ago
- Tools with a GUI for GPT fine-tune data preparation☆23 · Updated 3 years ago
- ☆9 · Updated 3 years ago
- Simple annotated implementation of GPT-NeoX in PyTorch☆110 · Updated 2 years ago
- ☆27 · Updated 3 years ago
- ☆27 · Updated 2 years ago
- RWKV-v2-RNN trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details.☆67 · Updated 2 years ago
- k_diffusion wrapper included for k_lms sampling; fixed for notebook use.☆20 · Updated last year
- ChatGPT-like web UI for RWKVstic☆100 · Updated 2 years ago
- Automated prompting and scoring framework to evaluate LLMs using updated human-knowledge prompts☆112 · Updated last year
- Code for the paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot", with a LLaMA implementation.☆71 · Updated 2 years ago
- ☆32 · Updated 2 months ago
- Discord AI generation bot to collect an aesthetic-rating dataset☆60 · Updated 2 years ago
- ☆130 · Updated 2 years ago
- 4-bit quantization of LLMs using GPTQ☆49 · Updated last year
- The first AI artist☆32 · Updated 2 years ago
- ☆62 · Updated 2 years ago
- Where we keep our notes about model training runs.☆16 · Updated 2 years ago
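Several of the repositories above share the same core trick for running large models on small GPUs: keep the full set of weights in CPU RAM and stream only a few layers at a time into a fixed VRAM budget, evicting earlier layers as new ones are needed. The sketch below is a minimal, framework-free simulation of that bookkeeping, not code from any of these repositories; the class name, sizes, and FIFO eviction policy are all illustrative assumptions.

```python
# Conceptual sketch of low-VRAM layer streaming (illustrative only):
# the model's layers live in CPU RAM, and each layer is "moved" to the
# GPU just before it runs, evicting older resident layers to stay under
# a fixed memory budget. Plain Python stands in for real CUDA transfers.

class ChunkedRunner:
    def __init__(self, layer_sizes_mb, gpu_budget_mb):
        self.layer_sizes = layer_sizes_mb  # weight size of each layer, in MB
        self.budget = gpu_budget_mb        # VRAM available for weights, in MB
        self.on_gpu = []                   # indices of currently resident layers
        self.used = 0                      # MB currently occupied

    def _evict_until_fits(self, size):
        # Evict the oldest resident layers (FIFO) until the new one fits.
        while self.used + size > self.budget:
            victim = self.on_gpu.pop(0)
            self.used -= self.layer_sizes[victim]

    def load(self, i):
        if i in self.on_gpu:
            return
        self._evict_until_fits(self.layer_sizes[i])
        self.on_gpu.append(i)
        self.used += self.layer_sizes[i]

    def forward(self):
        # Run layers in order, loading each just before it executes,
        # and record VRAM usage after each step.
        trace = []
        for i in range(len(self.layer_sizes)):
            self.load(i)
            trace.append((i, self.used))
        return trace

# A 6400 MB "model" (8 layers x 800 MB) run under a 3000 MB budget:
runner = ChunkedRunner(layer_sizes_mb=[800] * 8, gpu_budget_mb=3000)
trace = runner.forward()
peak = max(used for _, used in trace)
print(peak)  # 2400 — never exceeds the budget despite a 6400 MB model
```

The trade-off, as the repository descriptions hint, is speed: every eviction/reload is a CPU-to-GPU transfer, so generation is slower than with a fully resident model, but it runs at all on 3-6 GB cards.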