PygmalionAI / training-code
The code we currently use to fine-tune models.
☆115 · Updated last year
Alternatives and similar repositories for training-code
Users interested in training-code are comparing it to the libraries listed below.
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA ☆123 · Updated 2 years ago
- Generate Synthetic Data Using OpenAI, MistralAI, or AnthropicAI ☆222 · Updated last year
- A Simple Discord Bot for the Alpaca LLM ☆101 · Updated 2 years ago
- ☆74 · Updated last year
- A prompt/context management system ☆170 · Updated 2 years ago
- Automated prompting and scoring framework to evaluate LLMs using updated human-knowledge prompts ☆110 · Updated 2 years ago
- Some simple scripts that I use day-to-day when working with LLMs and the Hugging Face Hub ☆162 · Updated last year
- Command-line script for running inference on models such as falcon-7b-instruct ☆75 · Updated 2 years ago
- ☆161 · Updated 2 weeks ago
- Extend the original llama.cpp repo to support the RedPajama model ☆118 · Updated 11 months ago
- A Discord bot that roleplays! ☆150 · Updated last year
- Merge Transformers language models by use of gradient parameters ☆207 · Updated last year
- Let's create synthetic textbooks together :) ☆75 · Updated last year
- Harnessing the Memory Power of the Camelids ☆146 · Updated last year
- GPT-2 small trained on phi-like data ☆67 · Updated last year
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… ☆145 · Updated last year
- Falcon LLM ggml framework with CPU and GPU support ☆247 · Updated last year
- ☆135 · Updated last year
- Command-line script for running inference on models such as MPT-7B-Chat ☆100 · Updated 2 years ago
- A more memory-efficient rewrite of the HF Transformers implementation of Llama for use with quantized weights ☆64 · Updated last year
- Small fine-tuned LLMs for a diverse set of useful tasks ☆128 · Updated 2 years ago
- Run inference on the replit-3B code-instruct model using the CPU ☆158 · Updated 2 years ago
- An implementation of Self-Extend, expanding the context window via grouped attention ☆118 · Updated last year
- Scripts to create your own MoE models using MLX ☆90 · Updated last year
- Low-rank adapter extraction for fine-tuned Transformers models ☆175 · Updated last year
- Drop-in replacement for OpenAI, but with open models ☆152 · Updated 2 years ago
- Our own implementation of 'Layer Selective Rank Reduction' ☆240 · Updated last year
- ☆199 · Updated last year
- An autonomous LLM agent that runs on Wizcoder-15B ☆334 · Updated 10 months ago
- Unofficial Python bindings for the Rust llm library. 🐍❤️🦀 ☆75 · Updated 2 years ago