PygmalionAI / data-toolbox
Our data munging code.
☆34 · Updated 4 months ago
Alternatives and similar repositories for data-toolbox:
Users interested in data-toolbox are comparing it to the libraries listed below.
- Conversational language model toolkit for training against human preferences. ☆41 · Updated 10 months ago
- Image Diffusion block merging technique applied to transformer-based language models. ☆54 · Updated last year
- ☆27 · Updated last year
- An unsupervised model merging algorithm for transformer-based language models. ☆104 · Updated 9 months ago
- Model REVOLVER, a human-in-the-loop model mixing system. ☆33 · Updated last year
- Where we keep our notes about model training runs. ☆16 · Updated last year
- Text WebUI extension to add clever Notebooks to Chat mode. ☆139 · Updated last year
- 4-bit quantization of LLMs using GPTQ. ☆47 · Updated last year
- Gradio UI for RWKV LLM. ☆28 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs. ☆77 · Updated 10 months ago
- rwkv_chatbot ☆62 · Updated 2 years ago
- The official front-end UI. ☆40 · Updated last year
- ☆92 · Updated 4 months ago
- Finetune any model on HF in less than 30 seconds. ☆58 · Updated 2 weeks ago
- Experimental sampler to make LLMs more creative. ☆30 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA. ☆123 · Updated last year
- The code we currently use to fine-tune models. ☆113 · Updated 9 months ago
- RAG implementation for Ooba characters. Dynamically spins up a new Qdrant vector DB and manages retrieval and commits for conversations ba… ☆46 · Updated last year
- 📖 Notebooks related to RWKV. ☆59 · Updated last year
- Dynamic parameter modulation for oobabooga's text-generation-webui that adjusts generation parameters to better mirror user affect. ☆34 · Updated last year
- C/C++ implementation of PygmalionAI/pygmalion-6b. ☆55 · Updated last year
- Code for the paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot", with a LLaMA implementation. ☆71 · Updated last year
- Demonstration that finetuning a RoPE model on sequences longer than the pre-trained context adapts the model's context limit. ☆63 · Updated last year
- ChatGPT-like Web UI for RWKVstic. ☆100 · Updated last year
- My implementation of "Algorithm of Thoughts: Enhancing Exploration of Ideas in Large Language Models". ☆96 · Updated last year
- Zeus LLM Trainer is a rewrite of Stanford Alpaca, aiming to be the trainer for all large language models. ☆69 · Updated last year
- BlinkDL's RWKV-v4 running in the browser. ☆47 · Updated last year
- Train LLaMA with LoRA on one 4090 and merge the LoRA weights to work as Stanford Alpaca. ☆50 · Updated last year
- Instruct-tune LLaMA on consumer hardware. ☆72 · Updated last year
- SillyTavern MultiPlayer is an LLM chat interface, created by RossAscends, that allows multiple users to chat together with each other and… ☆78 · Updated 6 months ago