Gryphe / BlockMerge_Gradient
Merge Transformers language models by blending their parameters layer by layer along a gradient of merge ratios.
☆207 · Updated 9 months ago
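The core idea behind BlockMerge_Gradient is to blend two same-architecture checkpoints with a merge ratio that changes across the layer stack. The sketch below is a minimal illustration of that idea, assuming two compatible Hugging Face causal-LM checkpoints with LLaMA-style parameter names; the linear ramp and all function names here are illustrative assumptions, not the repository's actual API or ratio schedule.

```python
# Minimal sketch of layer-wise gradient blending between two same-architecture
# checkpoints. Names and the linear ramp are illustrative, not the
# BlockMerge_Gradient API.
import re
import torch
from transformers import AutoModelForCausalLM

def blend_ratio(layer_idx: int, num_layers: int, start: float = 1.0, end: float = 0.0) -> float:
    """Linearly ramp the merge ratio for model A from `start` to `end` across layers."""
    if num_layers <= 1:
        return start
    t = layer_idx / (num_layers - 1)
    return (1.0 - t) * start + t * end

def merge_models(path_a: str, path_b: str, out_path: str) -> None:
    model_a = AutoModelForCausalLM.from_pretrained(path_a, torch_dtype=torch.float32)
    model_b = AutoModelForCausalLM.from_pretrained(path_b, torch_dtype=torch.float32)
    num_layers = model_a.config.num_hidden_layers

    state_a, state_b = model_a.state_dict(), model_b.state_dict()
    merged = {}
    for name, tensor_a in state_a.items():
        # Per-layer weights follow LLaMA-style naming, e.g. "model.layers.12.self_attn...".
        match = re.search(r"layers\.(\d+)\.", name)
        # Non-layer weights (embeddings, final norm, lm_head) get a flat 50/50 blend here.
        ratio = blend_ratio(int(match.group(1)), num_layers) if match else 0.5
        merged[name] = ratio * tensor_a + (1.0 - ratio) * state_b[name]

    model_a.load_state_dict(merged)
    model_a.save_pretrained(out_path)
```

With a descending ramp like this, lower layers come mostly from the first model and upper layers mostly from the second; a real merging tool would expose that ratio schedule as configuration rather than hard-coding it.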
Alternatives and similar repositories for BlockMerge_Gradient
Users interested in BlockMerge_Gradient are comparing it to the libraries listed below.
- QLoRA: Efficient Finetuning of Quantized LLMs ☆78 · Updated last year
- Low-Rank adapter extraction for fine-tuned transformers models ☆171 · Updated last year
- This is our own implementation of 'Layer Selective Rank Reduction' ☆238 · Updated last year
- A bagel, with everything. ☆320 · Updated last year
- ☆72 · Updated last year
- Load multiple LoRA modules simultaneously and automatically switch the appropriate combination of LoRA modules to generate the best answer… ☆151 · Updated last year
- ☆157 · Updated 10 months ago
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… ☆147 · Updated last year
- Spherical Merge Pytorch/HF format Language Models with minimal feature loss (a generic SLERP sketch follows this list). ☆123 · Updated last year
- Multipack distributed sampler for fast padding-free training of LLMs ☆188 · Updated 9 months ago
- An unsupervised model merging algorithm for Transformers-based language models. ☆104 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA ☆122 · Updated last year
- Some simple scripts that I use day-to-day when working with LLMs and Huggingface Hub ☆161 · Updated last year
- ☆95 · Updated last year
- Model REVOLVER, a human in the loop model mixing system. ☆32 · Updated last year
- An implementation of Self-Extend, to expand the context window via grouped attention ☆119 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers ☆421 · Updated last year
- GPT-2 small trained on phi-like data ☆66 · Updated last year
- Experiments on speculative sampling with Llama models ☆126 · Updated last year
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated 10 months ago
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆202 · Updated last year
- Automated Identification of Redundant Layer Blocks for Pruning in Large Language Models ☆236 · Updated last year
- A pipeline for LLM knowledge distillation ☆104 · Updated 2 months ago
- ☆76 · Updated last year
- Full finetuning of large language models without large memory requirements ☆93 · Updated last year
- batched loras ☆343 · Updated last year
- An all-new Language Model That Processes Ultra-Long Sequences of 100,000+ Tokens, Ultra-Fast ☆147 · Updated 9 months ago
- Open Source WizardCoder Dataset ☆158 · Updated last year
- ☆269 · Updated 2 years ago
- Patch for MPT-7B which allows using and training a LoRA ☆58 · Updated 2 years ago
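For the "Spherical Merge Pytorch/HF format Language Models" entry above, the underlying technique is spherical linear interpolation (SLERP) of weight tensors. The snippet below is a generic, self-contained sketch of SLERP on a pair of tensors, not that repository's implementation; it falls back to plain linear interpolation when the two weight vectors are nearly parallel.

```python
# Generic sketch of spherical linear interpolation (SLERP) between two weight
# tensors, the idea behind the "Spherical Merge" entry above.
import torch

def slerp(w_a: torch.Tensor, w_b: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
    """Interpolate between w_a and w_b along the arc joining their directions;
    t=0 returns w_a, t=1 returns w_b."""
    a = w_a.flatten().float()
    b = w_b.flatten().float()
    a_unit = a / (a.norm() + eps)
    b_unit = b / (b.norm() + eps)
    dot = torch.clamp(torch.dot(a_unit, b_unit), -1.0, 1.0)
    omega = torch.arccos(dot)  # angle between the two weight vectors
    if omega.abs() < 1e-4:
        # Nearly parallel vectors: plain linear interpolation is numerically safer.
        merged = (1.0 - t) * a + t * b
    else:
        sin_omega = torch.sin(omega)
        merged = (torch.sin((1.0 - t) * omega) / sin_omega) * a \
               + (torch.sin(t * omega) / sin_omega) * b
    return merged.reshape(w_a.shape).to(w_a.dtype)
```

Compared with plain averaging, interpolating along the arc keeps the magnitude and direction of the blended weights closer to the endpoints, which is presumably what "minimal feature loss" in that description refers to.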