54rt1n / shardmerge
Using Fourier interpolation to merge large language models
☆11 · Updated 5 months ago
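The header describes merging model weights via Fourier interpolation. As a rough illustration only (the actual shardmerge algorithm may differ substantially), a frequency-domain merge can blend two same-shape weight tensors by taking some frequency components from one model and the rest from the other. The function name `fourier_merge` and its `cutoff` parameter are hypothetical, not taken from the repository:

```python
import numpy as np

def fourier_merge(w_a: np.ndarray, w_b: np.ndarray, cutoff: float = 0.25) -> np.ndarray:
    """Hypothetical sketch of a frequency-domain weight merge:
    low-frequency components come from w_a, high-frequency from w_b."""
    fa, fb = np.fft.fftn(w_a), np.fft.fftn(w_b)
    # Build the normalized frequency radius for every FFT bin.
    grids = np.meshgrid(*[np.fft.fftfreq(n) for n in w_a.shape], indexing="ij")
    radius = np.sqrt(sum(g ** 2 for g in grids))
    low = radius <= cutoff
    # Select each frequency bin from one model or the other, then invert.
    merged = np.where(low, fa, fb)
    return np.fft.ifftn(merged).real

# Example: with a cutoff above the maximum frequency radius, every bin
# comes from w_a and the merge reproduces it exactly.
rng = np.random.default_rng(0)
a, b = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
merged = fourier_merge(a, b, cutoff=1.0)
```

Because the FFT is linear, a plain linear mix of spectra would be identical to linear weight averaging; frequency-selective blending like the above is what would make a Fourier-domain merge behave differently from ordinary interpolation.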
Alternatives and similar repositories for shardmerge
Users interested in shardmerge are comparing it to the libraries listed below.
- Generates control vectors for use with llama.cpp in GGUF format. ☆24 · Updated 2 months ago
- The training notebooks that were similar to the original script used to train TinyMistral. ☆21 · Updated last year
- Fast approximate inference on a single GPU with sparsity-aware offloading ☆38 · Updated last year
- ☆49 · Updated 6 months ago
- Lego for GRPO ☆28 · Updated last week
- ☆27 · Updated last year
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite. ☆33 · Updated last year
- ☆13 · Updated last year
- Model REVOLVER, a human-in-the-loop model mixing system. ☆32 · Updated last year
- An unsupervised model merging algorithm for Transformers-based language models. ☆104 · Updated last year
- Full finetuning of large language models without large memory requirements ☆93 · Updated last year
- Steer LLM outputs towards a certain topic/subject and enhance response capabilities using activation engineering by adding steering vecto… ☆43 · Updated last year
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆139 · Updated 3 months ago
- Transplants vocabulary between language models, enabling the creation of draft models for speculative decoding WITHOUT retraining. ☆30 · Updated 2 months ago
- Simple GRPO scripts and configurations. ☆58 · Updated 4 months ago
- [WIP] Transformer to embed Danbooru labelsets ☆13 · Updated last year
- entropix-style sampling + GUI ☆26 · Updated 7 months ago
- QLoRA with Enhanced Multi-GPU Support ☆37 · Updated last year
- ☆130 · Updated 9 months ago
- ☆66 · Updated last year
- GPT-2 small trained on phi-like data ☆66 · Updated last year
- Cerule - A Tiny Mighty Vision Model ☆66 · Updated 9 months ago
- Easily view and modify JSON datasets for large language models ☆75 · Updated 2 weeks ago
- Train your own small BitNet model ☆71 · Updated 7 months ago
- Repository containing the SPIN experiments on the DIBT 10k ranked prompts ☆24 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆31 · Updated last year
- Tokun to can tokens ☆17 · Updated this week
- ☆14 · Updated last year
- Demonstration that finetuning a RoPE model on longer sequences than it was pre-trained on extends the model's context limit ☆62 · Updated last year
- ☆43 · Updated 3 months ago