abacaj / fine-tune-mistral
Fine-tune mistral-7B on 3090s, a100s, h100s
☆709 · Updated last year
Alternatives and similar repositories for fine-tune-mistral:
Users interested in fine-tune-mistral are comparing it to the libraries listed below.
- Customizable implementation of the self-instruct paper. ☆1,040 · Updated last year
- ☆412 · Updated last year
- Generate textbook-quality synthetic LLM pretraining data ☆498 · Updated last year
- Tune any FALCON in 4-bit ☆466 · Updated last year
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆692 · Updated 11 months ago
- ☆517 · Updated 7 months ago
- Implementation of the training framework proposed in Self-Rewarding Language Model, from MetaAI ☆1,376 · Updated 11 months ago
- Finetuning Large Language Models on One Consumer GPU in 2 Bits ☆720 · Updated 10 months ago
- Ungreedy subword tokenizer and vocabulary trainer for Python, Go & JavaScript ☆575 · Updated 9 months ago
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,455 · Updated 11 months ago
- Guide for fine-tuning Llama/Mistral/CodeLlama models and more