arcee-ai / mergekit
Tools for merging pretrained large language models.
☆6,231 · Updated 2 weeks ago
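As a point of reference, the simplest operation in this space, a linear weight average ("model soup") of two same-architecture checkpoints, can be written in a few lines of plain PyTorch. This is an illustrative sketch only, not mergekit's actual API; the checkpoint paths in the usage comment are hypothetical.

```python
# Toy linear merge ("model soup") of two checkpoints that share an
# architecture -- illustrates the idea, not mergekit's actual API.
import torch

def linear_merge(state_a: dict, state_b: dict, weight: float = 0.5) -> dict:
    """Interpolate two state dicts parameter-by-parameter."""
    assert state_a.keys() == state_b.keys(), "architectures must match"
    return {
        name: weight * state_a[name] + (1.0 - weight) * state_b[name]
        for name in state_a
    }

# Hypothetical usage:
# merged = linear_merge(torch.load("a.pt"), torch.load("b.pt"), weight=0.5)
# torch.save(merged, "merged.pt")
```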
Alternatives and similar repositories for mergekit
Users interested in mergekit are comparing it to the libraries listed below.
- A framework for few-shot evaluation of language models. ☆9,955 · Updated last week
- Robust recipes to align language models with human and AI preferences ☆5,338 · Updated last month
- PyTorch native post-training library ☆5,444 · Updated this week
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆2,863 · Updated last week
- AllenAI's post-training codebase ☆3,144 · Updated this week
- Go ahead and axolotl questions ☆10,324 · Updated this week
- Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks. ☆2,563 · Updated last week
- Modeling, training, eval, and inference code for OLMo ☆5,943 · Updated last week
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm. ☆4,931 · Updated 4 months ago
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters (the adapter math is sketched after this list) ☆1,852 · Updated last year
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. ☆2,235 · Updated 3 months ago
- ☆4,089 · Updated last year
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,596 · Updated last year
- General technology for enabling AI capabilities w/ LLMs and MLLMs ☆4,101 · Updated 2 months ago
- Accessible large language models via k-bit quantization for PyTorch (see the 4-bit loading sketch after this list). ☆7,533 · Updated this week
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ☆1,591 · Updated 10 months ago
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,606 · Updated last year
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆1,851 · Updated this week
- Doing simple retrieval from LLMs at various context lengths to measure accuracy ☆1,998 · Updated last year
- Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs ☆3,397 · Updated 3 months ago
- Minimalistic large language model 3D-parallelism training ☆2,164 · Updated last week
- [EMNLP'23, ACL'24] To speed up LLM inference and enhance LLMs' perception of key information, compress the prompt and KV-Cache, which ach… ☆5,388 · Updated 5 months ago
- A quick guide (especially) for trending instruction finetuning datasets ☆3,233 · Updated last year
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks ☆7,030 · Updated last year
- Simple and efficient pytorch-native transformer text generation in <1000 LOC of python. ☆6,061 · Updated last week
- 20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale. ☆12,691 · Updated last week
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆3,229 · Updated last month
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs. ☆6,943 · Updated this week
- Stanford NLP Python library for Representation Finetuning (ReFT) ☆1,508 · Updated 6 months ago
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,847 · Updated 3 weeks ago
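The S-LoRA and multi-LoRA entries above serve many adapters over one base model; the adapter itself is just a low-rank delta added to a frozen weight. A toy sketch of that update, with made-up shapes (not any server's actual code):

```python
# Toy LoRA update: W' = W + (alpha / r) * (B @ A), with rank r << min(d_out, d_in).
import torch

d_out, d_in, r, alpha = 1024, 1024, 8, 16
W = torch.randn(d_out, d_in)      # frozen base weight
A = torch.randn(r, d_in) * 0.01   # trainable down-projection
B = torch.zeros(d_out, r)         # trainable up-projection (zero-initialized)

W_adapted = W + (alpha / r) * (B @ A)  # effective weight the adapter induces
```

Because the delta is rank-r, a server can keep one copy of W and swap thousands of small (A, B) pairs across requests.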
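The bitsandbytes entry refers to k-bit loading of Hugging Face models. A minimal sketch assuming the standard transformers integration; the checkpoint name is illustrative:

```python
# 4-bit (NF4) loading via the transformers + bitsandbytes integration.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights as 4-bit NF4
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # dequantize to bf16 for matmuls
)

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",            # illustrative checkpoint
    quantization_config=quant_config,
    device_map="auto",
)
```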