Gryphe / MergeMonster
An unsupervised model merging algorithm for Transformers-based language models.
☆106 · Updated last year
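MergeMonster's own unsupervised merging procedure isn't documented on this page, but for orientation, here is a minimal sketch of the simplest form of weight-space merging: linear interpolation between two checkpoints that share an architecture. This is an illustration only, not MergeMonster's algorithm; the model IDs and the `alpha` ratio are placeholders.

```python
# Minimal weight-space merging sketch: alpha * A + (1 - alpha) * B.
# NOT MergeMonster's unsupervised algorithm -- just plain linear
# interpolation between two same-architecture checkpoints.
import torch
from transformers import AutoModelForCausalLM

def linear_merge(model_a_id: str, model_b_id: str, alpha: float = 0.5):
    # Both models must have identical architectures and parameter names.
    model_a = AutoModelForCausalLM.from_pretrained(model_a_id, torch_dtype=torch.float32)
    model_b = AutoModelForCausalLM.from_pretrained(model_b_id, torch_dtype=torch.float32)

    state_b = model_b.state_dict()
    merged_state = {}
    for name, tensor_a in model_a.state_dict().items():
        # Interpolate each parameter tensor; shapes must match exactly.
        merged_state[name] = alpha * tensor_a + (1.0 - alpha) * state_b[name]

    model_a.load_state_dict(merged_state)
    return model_a

# Usage (model IDs are placeholders):
# merged = linear_merge("org/model-a", "org/model-b", alpha=0.3)
# merged.save_pretrained("./merged-model")
```

Real merge tooling (including the repositories listed below) typically goes well beyond this, e.g. per-layer or gradient-guided blending rather than a single global ratio.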
Alternatives and similar repositories for MergeMonster
Users interested in MergeMonster are comparing it to the repositories listed below.
- Low-Rank adapter extraction for fine-tuned transformers models ☆177 · Updated last year
- Merge Transformers language models using gradient parameters. ☆208 · Updated last year
- This is our own implementation of 'Layer Selective Rank Reduction' ☆239 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA ☆123 · Updated 2 years ago
- Let's create synthetic textbooks together :) ☆75 · Updated last year
- GPT-2 small trained on phi-like data ☆67 · Updated last year
- Load multiple LoRA modules simultaneously and automatically switch the appropriate combination of LoRA modules to generate the best answer… ☆158 · Updated last year
- ☆116 · Updated 10 months ago
- ☆26 · Updated 2 years ago
- Image Diffusion block merging technique applied to transformers-based language models. ☆55 · Updated 2 years ago
- Experimental LLM Inference UX to aid in creative writing ☆123 · Updated 10 months ago
- ☆67 · Updated last year
- A pipeline-parallel training script for LLMs. ☆158 · Updated 5 months ago
- QLoRA: Efficient Finetuning of Quantized LLMs ☆76 · Updated last year
- Some simple scripts that I use day-to-day when working with LLMs and Huggingface Hub ☆160 · Updated 2 years ago
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆145 · Updated 8 months ago
- ☆162 · Updated 2 months ago
- Generates control vectors for use with llama.cpp in GGUF format. ☆32 · Updated 7 months ago
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes…) ☆146 · Updated 2 years ago
- ☆73 · Updated 2 years ago
- Train Llama Loras Easily ☆30 · Updated 2 years ago
- ☆136 · Updated last year
- Code for the paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot" with LLaMA implementation. ☆70 · Updated 2 years ago
- 5X faster, 60% less memory QLoRA finetuning ☆21 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆31 · Updated last year
- Experimental sampler to make LLMs more creative ☆31 · Updated 2 years ago
- Full finetuning of large language models without large memory requirements ☆93 · Updated last month
- A multimodal, function-calling-powered LLM webui. ☆216 · Updated last year
- Transplants vocabulary between language models, enabling the creation of draft models for speculative decoding WITHOUT retraining. ☆42 · Updated last week
- Testing LLM reasoning abilities with family relationship quizzes. ☆62 · Updated 8 months ago