PygmalionAI / training-code
The code we currently use to fine-tune models.
☆114 · Updated 11 months ago
Alternatives and similar repositories for training-code:
Users interested in training-code are comparing it to the libraries listed below.
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA ☆123 · Updated last year
- A Simple Discord Bot for the Alpaca LLM ☆101 · Updated last year
- Automated prompting and scoring framework to evaluate LLMs using updated human knowledge prompts ☆112 · Updated last year
- Command-line script for running inference with models such as MPT-7B-Chat ☆101 · Updated last year
- Instruct-tuning LLaMA on consumer hardware ☆66 · Updated 2 years ago
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes…) ☆146 · Updated last year
- Merge Transformers language models using gradient parameters ☆208 · Updated 8 months ago
- A Discord bot that roleplays! ☆148 · Updated last year
- Model REVOLVER, a human-in-the-loop model mixing system ☆33 · Updated last year
- Generate Synthetic Data Using OpenAI, MistralAI, or AnthropicAI ☆223 · Updated last year
- Our own implementation of "Layer-Selective Rank Reduction" ☆237 · Updated 11 months ago
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm ☆37 · Updated last year
- The one who calls upon functions: a function-calling language model ☆36 · Updated last year
- Harnessing the Memory Power of the Camelids ☆146 · Updated last year
- GPT-2 small trained on phi-like data ☆66 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs ☆78 · Updated last year (see the fine-tuning sketch after this list)
- Let's create synthetic textbooks together :) ☆74 · Updated last year
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights ☆64 · Updated last year
- Code for the paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot", with a LLaMA implementation ☆71 · Updated 2 years ago
- Command-line script for running inference with models such as falcon-7b-instruct ☆76 · Updated last year
- Scripts to create your own MoE models using MLX ☆89 · Updated last year
- LLaMA retrieval plugin script using OpenAI's retrieval plugin ☆324 · Updated 2 years ago
- Drop-in replacement for OpenAI, but with open models ☆152 · Updated last year
- Reimplementation of the task-generation part of the Alpaca paper ☆119 · Updated 2 years ago
- Low-rank adapter extraction for fine-tuned transformer models ☆173 · Updated last year
- Full fine-tuning of large language models without large memory requirements ☆94 · Updated last year
- Modified Stanford-Alpaca trainer for training Replit's code model ☆40 · Updated last year
- Small fine-tuned LLMs for a diverse set of useful tasks ☆126 · Updated last year
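
Many of the repositories above center on parameter-efficient fine-tuning of quantized models (QLoRA, LoRA extraction, GPTQ/bitsandbytes wrappers). For orientation, here is a minimal sketch of that workflow using the Hugging Face transformers, peft, and bitsandbytes libraries. The model name, dataset, and hyperparameters are illustrative assumptions only; this does not reproduce the training-code repository itself.

```python
# Minimal QLoRA-style fine-tuning sketch (illustrative, not production-ready).
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "meta-llama/Llama-2-7b-hf"  # assumption: any causal LM works here

# Load the base model with 4-bit NF4 quantization (the core idea behind QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Attach low-rank adapters; only these small matrices receive gradients.
lora_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)

tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

# Tokenize a small instruction dataset (dataset name is a placeholder assumption).
dataset = load_dataset("yahma/alpaca-cleaned", split="train[:1%]")

def tokenize(example):
    return tokenizer(example["output"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="qlora-out",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-4,
        logging_steps=10,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The quantized base weights stay frozen; training only the LoRA adapters is what lets several of the listed projects fine-tune 7B-class models on consumer hardware.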