PygmalionAI / data-toolbox
Our data munging code.
☆ 34 · Updated 8 months ago
Alternatives and similar repositories for data-toolbox
Users interested in data-toolbox are comparing it to the repositories listed below.
- Conversational Language model toolkit for training against human preferences. ☆ 42 · Updated last year
- ☆ 27 · Updated last year
- A discord bot that roleplays! ☆ 149 · Updated last year
- An unsupervised model merging algorithm for Transformers-based language models. ☆ 104 · Updated last year
- Model REVOLVER, a human in the loop model mixing system. ☆ 32 · Updated last year
- Image Diffusion block merging technique applied to transformers based Language Models. ☆ 53 · Updated 2 years ago
- The code we currently use to fine-tune models. ☆ 113 · Updated last year
- Dynamic parameter modulation for oobabooga's text-generation-webui that adjusts generation parameters to better mirror user affect. ☆ 34 · Updated last year
- GPT-2 small trained on phi-like data ☆ 66 · Updated last year
- Science-driven chatbot development ☆ 57 · Updated last year
- 4 bits quantization of SantaCoder using GPTQ ☆ 50 · Updated last year
- Train llama with lora on one 4090 and merge weight of lora to work as stanford alpaca. ☆ 51 · Updated last year
- 4 bits quantization of LLMs using GPTQ ☆ 49 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA ☆ 122 · Updated last year
- Patch for MPT-7B which allows using and training a LoRA ☆ 58 · Updated 2 years ago
- Where we keep our notes about model training runs. ☆ 16 · Updated 2 years ago
- Merge Transformers language models by use of gradient parameters. ☆ 207 · Updated 9 months ago
- BlinkDL's RWKV-v4 running in the browser ☆ 46 · Updated 2 years ago
- Just a simple HowTo for https://github.com/johnsmith0031/alpaca_lora_4bit ☆ 30 · Updated 2 years ago
- C/C++ implementation of PygmalionAI/pygmalion-6b ☆ 55 · Updated 2 years ago
- ☆ 12 · Updated 2 years ago
- Code for the paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot" with LLaMA implementation. ☆ 70 · Updated 2 years ago
- Harnessing the Memory Power of the Camelids ☆ 146 · Updated last year
- Instruct-tune LLaMA on consumer hardware ☆ 74 · Updated 2 years ago
- ToK aka Tree of Knowledge for Large Language Models LLM. It's a novel dataset that inspires knowledge symbolic correlation in simple inpu… ☆ 54 · Updated last year
- ☆ 72 · Updated last year
- Experimental sampler to make LLMs more creative ☆ 31 · Updated last year
- Train Large Language Models (LLM) using LoRA ☆ 25 · Updated 2 years ago
- Text WebUI extension to add clever Notebooks to Chat mode ☆ 139 · Updated last year
- 🎮👥 Experience the future of multiplayer gaming with MUDGPT's AI-generated virtual world! 🌟🤖 ☆ 43 · Updated 2 years ago