Locutusque / TinyMistral-train-eval
Training notebooks similar to the original script used to train TinyMistral.
☆19 · Updated 11 months ago
Related projects
Alternatives and complementary repositories for TinyMistral-train-eval
- Low-rank adapter extraction for fine-tuned Transformers models (a sketch of the idea follows this list) ☆162 · Updated 6 months ago
- Some simple scripts that I use day-to-day when working with LLMs and Huggingface Hub ☆155 · Updated last year
- ☆118 · Updated 3 months ago
- Train your own small bitnet model ☆56 · Updated last month
- Automated Identification of Redundant Layer Blocks for Pruning in Large Language Models ☆196 · Updated 6 months ago
- 1.58-bit LLaMa model ☆79 · Updated 7 months ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆129 · Updated 2 months ago
- QLoRA: Efficient Finetuning of Quantized LLMs (a setup sketch follows this list) ☆77 · Updated 7 months ago
- A pipeline-parallel training script for LLMs. ☆83 · Updated this week
- The simplest, fastest repository for training/finetuning medium-sized xLSTMs. ☆38 · Updated 5 months ago
- Load multiple LoRA modules simultaneously and automatically switch to the appropriate combination of LoRA modules to generate the best answer ☆142 · Updated 9 months ago
- An implementation of Self-Extend to expand the context window via grouped attention ☆118 · Updated 10 months ago
- Full finetuning of large language models without large memory requirements ☆93 · Updated 10 months ago
- GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ ☆97 · Updated last year
- This is our own implementation of 'Layer Selective Rank Reduction' ☆232 · Updated 5 months ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆173 · Updated 4 months ago
- Model REVOLVER, a human-in-the-loop model mixing system. ☆33 · Updated last year
- GPT-2 small trained on phi-like data ☆65 · Updated 9 months ago
- An unsupervised model merging algorithm for Transformers-based language models. ☆100 · Updated 6 months ago
- ☆53 · Updated 5 months ago
- Spherically merge PyTorch/HF-format language models with minimal feature loss (SLERP sketched after this list). ☆112 · Updated last year
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆113 · Updated 3 weeks ago
- Video+code lecture on building nanoGPT from scratch ☆64 · Updated 5 months ago
- ☆64 · Updated 5 months ago
- ☆93 · Updated last month
- Merge Transformers language models using gradient parameters. ☆201 · Updated 3 months ago
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full finetunes. ☆81 · Updated last year
- Tune MPTs ☆84 · Updated last year
- Entropix-style sampling + GUI ☆25 · Updated 3 weeks ago
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" (quantizer sketched after this list) ☆154 · Updated last month
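
The low-rank adapter extraction entry above rests on a simple idea: the weight delta left behind by fine-tuning is often approximately low-rank, so it can be factored into LoRA-style matrices. Below is a minimal sketch of that idea; the function name, toy shapes, and rank are illustrative assumptions, not the repo's actual API.

```python
# Minimal sketch of low-rank adapter extraction (illustrative; names,
# shapes, and rank are assumptions, not the repo's API): factor the
# weight delta between a fine-tuned and a base matrix with a truncated
# SVD, keeping the top-`rank` singular directions as LoRA factors.
import torch

def extract_lora(base_weight: torch.Tensor, tuned_weight: torch.Tensor, rank: int = 16):
    """Return factors (A, B) such that B @ A approximates tuned - base."""
    delta = tuned_weight - base_weight                # what fine-tuning changed
    U, S, Vh = torch.linalg.svd(delta, full_matrices=False)
    A = torch.diag(S[:rank].sqrt()) @ Vh[:rank]       # (rank, in_features)
    B = U[:, :rank] @ torch.diag(S[:rank].sqrt())     # (out_features, rank)
    return A, B

base = torch.randn(512, 512)
tuned = base + 0.01 * torch.randn(512, 64) @ torch.randn(64, 512)  # low-rank-ish delta
A, B = extract_lora(base, tuned, rank=64)
print((torch.norm(tuned - base - B @ A) / torch.norm(tuned - base)).item())  # small residual
```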
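The QLoRA entry pairs a frozen 4-bit quantized base model with trainable low-rank adapters. A minimal setup sketch with the Hugging Face stack (bitsandbytes + peft) follows; the checkpoint name, rank, and target modules are illustrative choices, not the linked repo's configuration.

```python
# Minimal QLoRA setup sketch using the Hugging Face stack (requires a
# CUDA-enabled bitsandbytes install). Checkpoint and hyperparameters
# are illustrative, not the linked repo's configuration.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",                 # NormalFloat4, as in the QLoRA paper
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "Locutusque/TinyMistral-248M",             # example checkpoint; any causal LM works
    quantization_config=bnb_config,
)
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],       # typical attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)     # frozen 4-bit base + trainable adapters
model.print_trainable_parameters()
```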
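The spherical-merge entry interpolates weights along the great circle between two models rather than linearly, which better preserves parameter norms. A minimal sketch follows; the parallel-fallback threshold and per-tensor treatment are assumptions, not the repo's exact implementation.

```python
# Minimal SLERP weight-merging sketch (illustrative, not the repo's
# exact implementation): interpolate each tensor pair along the great
# circle between them, falling back to lerp when nearly parallel.
import torch

def slerp(w0: torch.Tensor, w1: torch.Tensor, t: float, eps: float = 1e-7) -> torch.Tensor:
    v0, v1 = w0.flatten().float(), w1.flatten().float()
    cos_omega = torch.clamp(torch.dot(v0, v1) / (v0.norm() * v1.norm() + eps), -1.0, 1.0)
    omega = torch.arccos(cos_omega)             # angle between the two weight vectors
    if omega.abs() < eps:                       # nearly parallel: plain lerp is stable
        return (1 - t) * w0 + t * w1
    s = torch.sin(omega)
    out = (torch.sin((1 - t) * omega) / s) * v0 + (torch.sin(t * omega) / s) * v1
    return out.reshape(w0.shape).to(w0.dtype)

# Merge two (toy) state dicts tensor by tensor at t = 0.5.
sd_a = {"w": torch.randn(4, 4)}
sd_b = {"w": torch.randn(4, 4)}
merged = {k: slerp(sd_a[k], sd_b[k], 0.5) for k in sd_a}
```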
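The 1-bit-LLMs entry implements the ternary weight scheme from the cited paper: absmean scaling followed by rounding each weight to {-1, 0, +1}. A minimal standalone sketch of that quantizer follows; real implementations fold the scale into the matmul rather than dequantizing.

```python
# Minimal sketch of the ternary ("1.58-bit") weight quantizer from
# "The Era of 1-bit LLMs": scale by the mean absolute weight, then
# round-and-clip to {-1, 0, +1}. Illustrative standalone version.
import torch

def quantize_ternary(w: torch.Tensor, eps: float = 1e-5):
    gamma = w.abs().mean()                            # absmean scale
    w_q = torch.clamp(torch.round(w / (gamma + eps)), -1, 1)
    return w_q, gamma                                 # dequantize as w_q * gamma

w = torch.randn(256, 256)
w_q, gamma = quantize_ternary(w)
print(sorted(w_q.unique().tolist()))                  # [-1.0, 0.0, 1.0]
print((w - w_q * gamma).abs().mean().item())          # mean quantization error
```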