arcee-ai / mergekit
Tools for merging pretrained large language models.
☆6,783 · Updated 2 weeks ago (Jan 26, 2026)
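mergekit merges are declared in a YAML config rather than code. A minimal sketch of a linear (weighted-average) merge is shown below; the model names are placeholders, not recommendations:

```yaml
# Sketch of a mergekit linear-merge config (model names are hypothetical).
models:
  - model: org/model-a        # placeholder first checkpoint
    parameters:
      weight: 0.5             # contribution to the merged weights
  - model: org/model-b        # placeholder second checkpoint
    parameters:
      weight: 0.5
merge_method: linear          # weighted average of parameter tensors
dtype: float16
```

A config like this is typically run with the `mergekit-yaml` CLI entry point, e.g. `mergekit-yaml config.yml ./merged-model`.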
Alternatives and similar repositories for mergekit
Users interested in mergekit are comparing it to the libraries listed below.
- Go ahead and axolotl questions ☆11,289 · Updated this week
- Robust recipes to align language models with human and AI preferences ☆5,495 · Updated 5 months ago (Sep 8, 2025)
- A framework for few-shot evaluation of language models. ☆11,393 · Updated this week
- Train transformer language models with reinforcement learning. ☆17,360 · Updated this week
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆3,084 · Updated 2 weeks ago (Jan 26, 2026)
- Codebase for Merging Language Models (ICML 2024) ☆864 · Updated last year (May 5, 2024)
- Official repository of Evolutionary Optimization of Model Merging Recipes ☆1,395 · Updated last year (Nov 29, 2024)
- Large Language Model Text Generation Inference ☆10,757 · Updated last month (Jan 8, 2026)
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆70,205 · Updated this week
- Freeing data processing from scripting madness by providing a set of platform-agnostic, customizable pipeline processing blocks. ☆2,885 · Updated this week
- Minimalistic large language model 3D-parallelism training ☆2,544 · Updated 2 months ago (Dec 11, 2025)
- SGLang is a high-performance serving framework for large language models and multimodal models. ☆23,439 · Updated this week
- A fast inference library for running LLMs locally on modern consumer-class GPUs ☆4,440 · Updated 2 months ago (Dec 9, 2025)
- Fast and memory-efficient exact attention ☆22,231 · Updated this week
- Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek, Qwen, Llama, Gemma, TTS 2x faster with 70% less VRAM. ☆51,922 · Updated this week
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆20,619 · Updated this week
- Accessible large language models via k-bit quantization for PyTorch. ☆7,952 · Updated this week
- 20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale. ☆13,155 · Updated last week (Feb 8, 2026)
- DSPy: The framework for programming—not prompting—language models ☆32,156 · Updated this week
- PyTorch native post-training library ☆5,669 · Updated this week
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,835 · Updated last year (Jun 10, 2024)
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ☆8,891 · Updated last year (May 3, 2024)
- LLMs built upon Evol Instruct: WizardLM, WizardCoder, WizardMath ☆9,477 · Updated 8 months ago (Jun 7, 2025)
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm. ☆5,028 · Updated 10 months ago (Apr 11, 2025)
- Structured Outputs ☆13,403 · Updated last week (Feb 6, 2026)
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks