harubaru / convogpt
Conversational Language model toolkit for training against human preferences.
☆42 · Updated last year
Alternatives and similar repositories for convogpt:
Users interested in convogpt are comparing it to the libraries listed below.
- Platform- and API-agnostic library for powering chatbots ☆24 · Updated 2 years ago
- Our data munging code. ☆34 · Updated 7 months ago
- Where we keep our notes about model training runs. ☆16 · Updated 2 years ago
- A ready-to-deploy container implementing an easy-to-use REST API for accessing Language Models. ☆64 · Updated 2 years ago
- Doohickey is a Stable Diffusion tool for technical artists who want to stay up to date with the latest developments in the field. ☆39 · Updated 2 years ago
- ☆27 · Updated last year
- Hidden Engrams: Long-Term Memory for Transformer Model Inference ☆35 · Updated 3 years ago
- BlinkDL's RWKV-v4 running in the browser ☆47 · Updated 2 years ago
- Image-diffusion block-merging technique applied to transformer-based Language Models. ☆54 · Updated last year
- Create soft prompts for fairseq 13B dense, GPT-J-6B, and GPT-Neo-2.7B for free in a Google Colab TPU instance ☆28 · Updated 2 years ago
- Demonstration that finetuning a RoPE model on longer sequences than it was pre-trained on extends the model's context limit ☆63 · Updated last year
- Run ONNX RWKV-v4 models with GPU acceleration using DirectML [Windows], or just on CPU [Windows AND Linux]; limited to the 430M model at this… ☆20 · Updated 2 years ago
- An unsupervised model-merging algorithm for Transformer-based language models. ☆105 · Updated last year
- Instruct-tuning LLaMA on consumer hardware ☆66 · Updated 2 years ago
- 4-bit quantization of LLMs using GPTQ ☆49 · Updated last year
- Simple extension for text-generation-webui that injects recent conversation history into the negative prompt with the goal of minimizing … ☆33 · Updated last year
- Trying to deconstruct RWKV in understandable terms ☆14 · Updated last year
- Experimental sampler to make LLMs more creative ☆31 · Updated last year
- Alpaca LoRA ☆26 · Updated last year
- Code for the paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot", with a LLaMA implementation. ☆71 · Updated 2 years ago
- ☆27 · Updated 2 years ago
- Train Llama LoRAs easily ☆31 · Updated last year
- ☆32 · Updated 2 years ago
- Inference code for LLaMA models with a Gradio interface and rolling generation like ChatGPT ☆48 · Updated 2 years ago
- A simple extension that uses Bark Text-to-Speech for audio output ☆35 · Updated last year
- A Discord bot that roleplays! ☆148 · Updated last year
- C/C++ implementation of PygmalionAI/pygmalion-6b ☆56 · Updated 2 years ago
- Colab notebooks to run a basic AI Dungeon clone using gpt-neo-2.7B ☆64 · Updated 3 years ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers (QLoRA) ☆123 · Updated last year
- k_diffusion wrapper included for k_lms sampling; fixed for notebook. ☆20 · Updated last year