54rt1n / shardmerge
Using Fourier interpolation to merge large language models
☆11 · Updated 9 months ago
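The page doesn't spell out the algorithm beyond the one-line description, but "Fourier interpolation" suggests blending checkpoint weights in the frequency domain rather than directly in weight space. A minimal sketch of that idea, assuming two same-shape PyTorch weight matrices and a simple low/high-frequency split (the function name, the per-row FFT, and the `cutoff` parameter are illustrative assumptions, not shardmerge's actual interface):

```python
import torch

def fourier_merge(w_a: torch.Tensor, w_b: torch.Tensor, cutoff: float = 0.5) -> torch.Tensor:
    """Merge two same-shape weight matrices in the frequency domain:
    low-frequency components come from w_a, high-frequency from w_b."""
    fa = torch.fft.rfft(w_a.float(), dim=-1)   # spectrum of each row of w_a
    fb = torch.fft.rfft(w_b.float(), dim=-1)
    n_bins = fa.shape[-1]
    low = torch.zeros(n_bins, dtype=torch.bool, device=fa.device)
    low[: int(cutoff * n_bins)] = True          # mark the low-frequency bins
    merged = torch.where(low, fa, fb)           # pick per-bin between the two spectra
    return torch.fft.irfft(merged, n=w_a.shape[-1], dim=-1).to(w_a.dtype)
```

A plain linear blend of the spectra would be identical to interpolating the weights directly (the FFT is linear), so any merge that benefits from the Fourier view has to treat frequency bins differently, as the cutoff split above does.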
Alternatives and similar repositories for shardmerge
Users who are interested in shardmerge are comparing it to the libraries listed below.
- An unsupervised model merging algorithm for Transformer-based language models. ☆106 · Updated last year
- Generates control vectors for use with llama.cpp in GGUF format. ☆32 · Updated 6 months ago
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆145 · Updated 7 months ago
- ☆136 · Updated last year
- ☆26 · Updated 2 years ago
- Low-Rank adapter extraction for fine-tuned transformer models (an extraction sketch follows this list) ☆177 · Updated last year
- ☆49 · Updated last year
- Full finetuning of large language models without large memory requirements ☆93 · Updated 3 weeks ago
- Modeling code for a BitNet b1.58 Llama-style model. ☆25 · Updated last year
- ☆55 · Updated 11 months ago
- ☆62 · Updated 3 months ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆201 · Updated last year
- entropix-style sampling + GUI ☆27 · Updated 11 months ago
- RWKV-7: Surpassing GPT ☆98 · Updated 11 months ago
- Lightweight toolkit to train and fine-tune 1.58-bit language models ☆92 · Updated 4 months ago
- ☆39 · Updated last year
- Steer LLM outputs towards a certain topic/subject and enhance response capabilities using activation engineering by adding steering vecto… (the basic construction is sketched after this list) ☆42 · Updated last year
- Fast, modular code to create and train cutting-edge LLMs ☆68 · Updated last year
- A pipeline-parallel training script for LLMs. ☆157 · Updated 5 months ago
- Cerule - A Tiny Mighty Vision Model ☆67 · Updated last year
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" (the underlying quantizer is sketched after this list) ☆154 · Updated last year
- Model REVOLVER, a human-in-the-loop model mixing system. ☆32 · Updated 2 years ago
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆69 · Updated 2 years ago
- NanoGPT-speedrunning for the poor T4 enjoyers ☆72 · Updated 5 months ago
- A public implementation of the ReLoRA pretraining method, built on Lightning AI's PyTorch Lightning suite. ☆35 · Updated last year
- Demonstration that fine-tuning a RoPE model on sequences longer than its pre-training length extends the model's context limit ☆62 · Updated 2 years ago
- PyTorch implementation of models from the Zamba2 series. ☆185 · Updated 8 months ago
- Lego for GRPO ☆30 · Updated 4 months ago
- Fast, Modern, and Low Precision PyTorch Optimizers ☆115 · Updated last month
- smolLM with an Entropix sampler in PyTorch ☆150 · Updated 11 months ago
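The low-rank adapter extraction entry above reduces, in its simplest form, to a truncated SVD of the weight delta between a fine-tune and its base model. A minimal sketch, assuming 2-D weight matrices (the function name and the rank-16 default are assumptions, not the repository's interface):

```python
import torch

def extract_lora(w_base: torch.Tensor, w_ft: torch.Tensor, rank: int = 16):
    """Factor the fine-tuning delta W_ft - W_base into LoRA-style A/B matrices."""
    delta = (w_ft - w_base).float()
    u, s, vh = torch.linalg.svd(delta, full_matrices=False)
    sqrt_s = s[:rank].sqrt()                 # split singular values between the factors
    lora_b = u[:, :rank] * sqrt_s            # (out_features, rank)
    lora_a = sqrt_s[:, None] * vh[:rank]     # (rank, in_features)
    return lora_a, lora_b                    # w_base + lora_b @ lora_a ≈ w_ft
```

Truncating the SVD at rank r gives the best rank-r approximation of the delta, which is why this recovers a usable adapter when the fine-tune's changes are approximately low-rank.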
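The control-vector and steering-vector entries above share one basic construction: take the difference of mean hidden activations between contrastive prompt sets at a chosen layer, then add the scaled vector back into the residual stream during generation. A sketch under those assumptions (function names and the `alpha` scale are illustrative):

```python
import torch

def steering_vector(pos_acts: torch.Tensor, neg_acts: torch.Tensor) -> torch.Tensor:
    """pos_acts / neg_acts: (n_prompts, hidden_dim) activations captured at one
    layer for prompts that do / do not exhibit the target behaviour."""
    return pos_acts.mean(dim=0) - neg_acts.mean(dim=0)

def apply_steering(hidden: torch.Tensor, vec: torch.Tensor, alpha: float = 4.0) -> torch.Tensor:
    """hidden: (batch, seq, hidden_dim) residual-stream activations at the same layer."""
    return hidden + alpha * vec   # broadcast the vector across batch and sequence
```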
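Several entries above (the BitNet b1.58 modeling code, the 1.58-bit training toolkit, and the 1-bit LLM implementation) center on the ternary weight format from "The Era of 1-bit LLMs": weights rounded to {-1, 0, +1} with a per-tensor absmean scale. A sketch of that quantizer (the function name and `eps` default are assumptions):

```python
import torch

def absmean_ternary(w: torch.Tensor, eps: float = 1e-5):
    """Quantize a weight matrix to {-1, 0, +1} with a per-tensor absmean scale."""
    scale = w.abs().mean().clamp(min=eps)   # mean absolute value of the weights
    w_q = (w / scale).round().clamp(-1, 1)  # round-then-clip to ternary values
    return w_q, scale                        # dequantize with w_q * scale
```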