An unsupervised model merging algorithm for Transformers-based language models.
☆108 · Updated Apr 29, 2024
Alternatives and similar repositories for MergeMonster
Users interested in MergeMonster are comparing it to the repositories listed below.
- Merge Transformers language models using gradient parameters. ☆214 · Updated Aug 8, 2024
- EXL2 quantization generalized to other models. ☆10 · Updated Mar 17, 2024
- Model REVOLVER, a human-in-the-loop model mixing system. ☆33 · Updated Aug 2, 2023
- QuIP quantization. ☆62 · Updated Mar 17, 2024
- Our own implementation of 'Layer-Selective Rank Reduction'. ☆240 · Updated May 26, 2024
- Using Fourier interpolation to merge large language models. ☆11 · Updated Jan 6, 2026
- Large-scale LLM inference engine. ☆1,677 · Updated Mar 12, 2026
- Tools for merging pretrained large language models. ☆6,867 · Updated this week
- Efficient 3-bit/4-bit quantization of LLaMA models. ☆18 · Updated May 18, 2023
- Automated identification of redundant layer blocks for pruning in large language models. ☆263 · Updated Apr 23, 2024
- A community list of common phrases generated by GPT and Claude models. ☆79 · Updated Nov 19, 2023
- Modified beam search with periodic restart. ☆12 · Updated Sep 12, 2024
- ☆11 · Updated Dec 11, 2024
- Image-diffusion block-merging technique applied to transformer-based language models. ☆56 · Updated May 8, 2023
- Automatically quantize GGUF models. ☆219 · Updated Dec 23, 2025
- An experiment to see if ChatGPT can improve the output of the Stanford Alpaca dataset. ☆12 · Updated Mar 29, 2023
- Spherical merging (SLERP) of PyTorch/HF-format language models with minimal feature loss (see the sketch after this list). ☆146 · Updated Sep 10, 2023
- Collection of high-quality roleplay conversations. ☆15 · Updated Dec 1, 2024
- Fork of kingoflolz/mesh-transformer-jax with memory usage optimizations and support for GPT-Neo, GPT-NeoX, BLOOM, OPT and fairseq dense L… ☆22 · Updated Nov 14, 2022
- Repository containing the SPIN experiments on the DIBT 10k ranked prompts. ☆23 · Updated Mar 12, 2024
- Entropy-based sampling and parallel CoT decoding. ☆17 · Updated Oct 9, 2024
- A Korean LLM fine-tuned from KoAlpaca using the IA3 method. ☆69 · Updated Aug 21, 2023
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free. ☆233 · Updated Oct 31, 2024
- KoTAN: Korean Translation and Augmentation with fine-tuned NLLB. ☆23 · Updated Jan 4, 2024
- A repository of prompts and Python scripts for intelligent transformation of raw text into diverse formats. ☆31 · Updated May 29, 2023
- Prompt Jinja2 templates for LLMs. ☆35 · Updated Jul 9, 2025
- A collection of simple transformer-based chatbots. ☆19 · Updated Dec 5, 2022
- My implementation of Q-Sparse: All Large Language Models Can Be Fully Sparsely-Activated. ☆34 · Updated Aug 14, 2024
- Create custom LLMs. ☆1,820 · Updated Nov 8, 2025
- Produce your own Dynamic 3.0 quants and achieve optimum accuracy & SOTA quantization performance! Input your VRAM and RAM and the toolcha… ☆82 · Updated this week
- A public implementation of the ReLoRA pretraining method, built on Lightning AI's PyTorch Lightning suite. ☆34 · Updated Mar 2, 2024
- ☆93 · Updated Dec 9, 2025
- ☆16 · Updated Feb 10, 2023
- A bagel, with everything. ☆326 · Updated Apr 11, 2024
- Low-rank adapter extraction for fine-tuned transformer models. ☆181 · Updated May 2, 2024
- Sakura-SOLAR-DPO: Merge, SFT, and DPO. ☆116 · Updated Dec 30, 2023
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language M… ☆253 · Updated Oct 30, 2024
- Code for the paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot" with LLaMA implementation. ☆71 · Updated Mar 30, 2023
- A fast inference library for running LLMs locally on modern consumer-class GPUs. ☆4,468 · Updated Mar 4, 2026
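
Several of the repositories above merge models directly in weight space. For orientation, here is a minimal sketch of spherical linear interpolation (SLERP), the idea behind the "Spherical merging" entry; it assumes two checkpoints with identical architectures, and the function names are illustrative rather than taken from any listed repository.

```python
# Minimal SLERP merge sketch. Assumes both state dicts have identical keys
# and shapes; in practice you would skip non-float buffers and may want a
# different interpolation factor t per layer.
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate between two weight tensors, treated as flat vectors."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    # Angle between the two weight vectors.
    omega = torch.arccos((a_unit * b_unit).sum().clamp(-1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation.
        merged = (1.0 - t) * a_flat + t * b_flat
    else:
        merged = (torch.sin((1.0 - t) * omega) / so) * a_flat \
               + (torch.sin(t * omega) / so) * b_flat
    return merged.reshape(a.shape).to(a.dtype)

def merge_state_dicts(sd_a: dict, sd_b: dict, t: float = 0.5) -> dict:
    """Merge two same-architecture checkpoints key by key."""
    return {k: slerp(t, v, sd_b[k]) for k, v in sd_a.items()}
```

Compared with naive linear averaging, interpolating along the arc preserves the norm of the weight vectors, which is the "minimal feature loss" property these merge tools advertise; the per-layer greedy or gradient-guided variants in the repositories above differ mainly in how they choose t.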