Locutusque / TinyMistral-train-eval
Training notebooks similar to the original script used to train TinyMistral.
☆21 · Updated last year
Alternatives and similar repositories for TinyMistral-train-eval:
Users who are interested in TinyMistral-train-eval are comparing it to the libraries listed below.
- Generates control vectors for use with llama.cpp in GGUF format. ☆19 · Updated last week
- An unsupervised model merging algorithm for Transformers-based language models. ☆104 · Updated 11 months ago
- The simplest, fastest repository for training/finetuning medium-sized xLSTMs. ☆42 · Updated 10 months ago
- Video+code lecture on building nanoGPT from scratch ☆66 · Updated 9 months ago
- Model REVOLVER, a human in the loop model mixing system. ☆33 · Updated last year
- Low-Rank adapter extraction for fine-tuned transformers models ☆171 · Updated 10 months ago
- QLoRA: Efficient Finetuning of Quantized LLMs ☆77 · Updated 11 months ago
- ☆126 · Updated 7 months ago
- ☆66 · Updated 10 months ago
- Easy to use, High Performant Knowledge Distillation for LLMs ☆55 · Updated this week
- Train your own small bitnet model ☆65 · Updated 5 months ago
- 1.58-bit LLaMa model ☆82 · Updated 11 months ago
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆138 · Updated last month
- GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ ☆99 · Updated last year
- Load multiple LoRA modules simultaneously and automatically switch the appropriate combination of LoRA modules to generate the best answe… ☆150 · Updated last year
- Testing LLM reasoning abilities with family relationship quizzes. ☆62 · Updated 2 months ago
- An implementation of Self-Extend, to expand the context window via grouped attention ☆118 · Updated last year
- Full finetuning of large language models without large memory requirements ☆93 · Updated last year
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆195 · Updated 8 months ago
- ☆53 · Updated 10 months ago
- ☆16 · Updated 9 months ago
- Modeling code for a BitNet b1.58 Llama-style model. ☆23 · Updated 11 months ago
- Automated Identification of Redundant Layer Blocks for Pruning in Large Language Models ☆227 · Updated 11 months ago
- ☆111 · Updated 3 months ago
- GPT-2 small trained on phi-like data ☆65 · Updated last year
- ☆49 · Updated last year
- A pipeline parallel training script for LLMs. ☆136 · Updated last week
- entropix style sampling + GUI ☆25 · Updated 5 months ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆141 · Updated 6 months ago
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆230 · Updated 5 months ago