PygmalionAI / data-toolbox
Our data munging code.
☆34 · Updated 6 months ago
Alternatives and similar repositories for data-toolbox:
Users interested in data-toolbox are comparing it to the libraries listed below.
- Conversational Language model toolkit for training against human preferences. ☆42 · Updated last year
- Image Diffusion block merging technique applied to transformers based Language Models. ☆54 · Updated last year
- ☆27 · Updated last year
- An unsupervised model merging algorithm for Transformers-based language models. ☆105 · Updated 11 months ago
- Model REVOLVER, a human in the loop model mixing system. ☆33 · Updated last year
- Train llama with lora on one 4090 and merge weight of lora to work as stanford alpaca. ☆51 · Updated last year
- rwkv_chatbot ☆62 · Updated 2 years ago
- Where we keep our notes about model training runs. ☆16 · Updated 2 years ago
- Text WebUI extension to add clever Notebooks to Chat mode ☆139 · Updated last year
- The code we currently use to fine-tune models. ☆114 · Updated 11 months ago
- Gradio UI for RWKV LLM ☆29 · Updated 2 years ago
- Experimental sampler to make LLMs more creative ☆30 · Updated last year
- Dynamic parameter modulation for oobabooga's text-generation-webui that adjusts generation parameters to better mirror user affect. ☆35 · Updated last year
- C/C++ implementation of PygmalionAI/pygmalion-6b ☆56 · Updated last year
- SillyTavern MultiPlayer is an LLM chat interface, created by RossAscends, that allows multiple users to chat together with each other and… ☆81 · Updated 8 months ago
- 4 bits quantization of LLaMa using GPTQ ☆130 · Updated last year
- A discord bot that roleplays! ☆148 · Updated last year
- ChatGPT-like Web UI for RWKVstic ☆100 · Updated last year
- 🎮👥 Experience the future of multiplayer gaming with MUDGPT's AI-generated virtual world! 🌟🤖 ☆43 · Updated last year
- 4 bits quantization of LLMs using GPTQ ☆49 · Updated last year
- mikugg is a Frontend for "Generative Visual Novels" ☆145 · Updated 3 weeks ago
- Train Llama Loras Easily ☆31 · Updated last year
- BlinkDL's RWKV-v4 running in the browser ☆47 · Updated 2 years ago
- ☆73 · Updated last year
- Demonstration that finetuning RoPE model on larger sequences than the pre-trained model adapts the model context limit ☆63 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA ☆123 · Updated last year
- ToK aka Tree of Knowledge for Large Language Models LLM. It's a novel dataset that inspires knowledge symbolic correlation in simple inpu… ☆51 · Updated last year
- ☆12 · Updated last year
- oobabooga extension - Experimental sampler to make LLMs more creative ☆23 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs ☆78 · Updated last year