flowritecom / flow-merge
flow-merge is a Python library for merging multiple transformer-based language models using popular merge methods such as model soups, SLERP, TIES-Merging, and DARE.
☆17 · Updated 3 months ago
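For context on what one of these merge methods actually computes, below is a minimal sketch of parameter-wise SLERP between two checkpoints. This is not flow-merge's API; the function names and the use of raw PyTorch state dicts are assumptions made purely for illustration.

```python
import torch


def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors at fraction t (illustrative sketch)."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    # Angle between the two parameter vectors, clamped for numerical safety.
    omega = torch.arccos(torch.clamp(torch.dot(a_unit, b_unit), -1.0, 1.0))
    if omega.abs() < eps:
        # Nearly parallel tensors: fall back to plain linear interpolation.
        return (1 - t) * a + t * b
    so = torch.sin(omega)
    merged = (torch.sin((1 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return merged.reshape(a.shape).to(a.dtype)


def slerp_state_dicts(sd_a: dict, sd_b: dict, t: float = 0.5) -> dict:
    """Merge two state dicts with identical keys and shapes, one parameter tensor at a time."""
    return {name: slerp(t, sd_a[name], sd_b[name]) for name in sd_a}
```

Methods such as TIES-Merging and DARE differ mainly in how they sparsify and sign-resolve the task-vector deltas before combining them, rather than interpolating the raw weights directly.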
Alternatives and similar repositories for flow-merge
Users interested in flow-merge are comparing it to the libraries listed below.
- Unofficial Implementation of Evolutionary Model Merging ☆38 · Updated last year
- A repository for research on medium-sized language models. ☆76 · Updated 11 months ago
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆28 · Updated 2 weeks ago
- ☆48 · Updated 6 months ago
- The Benefits of a Concise Chain of Thought on Problem Solving in Large Language Models ☆22 · Updated 5 months ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆31 · Updated 11 months ago
- The simplest, fastest repository for training/finetuning medium-sized xLSTMs. ☆42 · Updated 11 months ago
- LLM-Training-API: Including Embeddings & ReRankers, mergekit, LaserRMT ☆27 · Updated last year
- ☆53 · Updated 11 months ago
- Repository for the Q-Filters method (https://arxiv.org/pdf/2503.02812) ☆30 · Updated 2 months ago
- Spherical merge of PyTorch/HF-format language models with minimal feature loss. ☆121 · Updated last year
- Set of scripts to finetune LLMs ☆37 · Updated last year
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆57 · Updated 8 months ago
- Lightweight toolkit package to train and fine-tune 1.58-bit language models ☆44 · Updated this week
- ☆78 · Updated 6 months ago
- Minimal implementation of the Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models paper (arXiv:2401.01335) ☆29 · Updated last year
- Repo hosting code and materials related to speeding up LLM inference using token merging. ☆36 · Updated last year
- 5X faster, 60% less memory QLoRA finetuning ☆21 · Updated 11 months ago
- ☆25 · Updated 4 months ago
- The official code repo and data hub of the top_nsigma sampling strategy for LLMs. ☆24 · Updated 3 months ago
- entropix-style sampling + GUI ☆26 · Updated 6 months ago
- An open-source replication of the strawberry method that leverages Monte Carlo Search with PPO and/or DPO ☆29 · Updated this week
- Prune transformer layers ☆69 · Updated 11 months ago
- A single repo with all scripts and utils to train / fine-tune the Mamba model with or without FIM ☆54 · Updated last year
- ☆49 · Updated last year
- Plug-and-play PyTorch implementation of the paper "Evolutionary Optimization of Model Merging Recipes" by Sakana AI ☆30 · Updated 6 months ago
- My Implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated ☆32 · Updated 9 months ago
- Lego for GRPO ☆28 · Updated last month
- ☆33 · Updated 11 months ago
- ☆16 · Updated 2 months ago