SakanaAI / evolutionary-model-merge
Official repository of Evolutionary Optimization of Model Merging Recipes
☆ 1,230 · Updated 7 months ago
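The repository's core idea is to treat a model-merging recipe (e.g. the interpolation coefficients used to average parameters from source models) as a genome and optimize it with evolutionary search against a downstream fitness score. A minimal toy sketch of that loop, using two dictionaries of arrays as stand-in "models" and a synthetic fitness function (all names here are illustrative, not from the repository's code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy "models": dicts of parameter tensors with matching shapes.
model_a = {"w": np.array([1.0, 0.0]), "b": np.array([0.5])}
model_b = {"w": np.array([0.0, 1.0]), "b": np.array([-0.5])}


def merge(alpha):
    """Linearly interpolate each tensor: (1 - a) * A + a * B."""
    return {k: (1 - alpha[k]) * model_a[k] + alpha[k] * model_b[k]
            for k in model_a}


def fitness(model):
    """Stand-in evaluation: closeness of merged weights to a target.

    In the real setting this would be a benchmark score for the
    merged LLM; here it is just negative squared error to a target.
    """
    target = {"w": np.array([0.3, 0.7]), "b": np.array([0.0])}
    return -sum(np.sum((model[k] - target[k]) ** 2) for k in model)


# Simple (mu + lambda) evolution over per-tensor merge coefficients.
pop = [{"w": rng.uniform(), "b": rng.uniform()} for _ in range(8)]
for _ in range(50):
    children = [{k: np.clip(v + rng.normal(0, 0.1), 0, 1).item()
                 for k, v in p.items()} for p in pop]
    pop = sorted(pop + children,
                 key=lambda a: fitness(merge(a)), reverse=True)[:8]

best = pop[0]  # the evolved merge recipe
```

The actual project evolves far richer recipes (including which layers come from which model) and evaluates candidates on real benchmarks; this sketch only shows the search-over-merge-coefficients structure.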
Related projects
Alternatives and complementary repositories for evolutionary-model-merge
- Code for Quiet-STaR ☆ 651 · Updated 3 months ago
- The official implementation of Self-Play Fine-Tuning (SPIN) ☆ 1,045 · Updated 6 months ago
- Implementation of "BitNet: Scaling 1-bit Transformers for Large Language Models" in PyTorch ☆ 1,692 · Updated last week
- Implementation of the training framework proposed in Self-Rewarding Language Model, from Meta AI ☆ 1,336 · Updated 7 months ago
- ReFT: Representation Finetuning for Language Models ☆ 1,159 · Updated 2 weeks ago
- Reaching LLaMA2 Performance with 0.1M Dollars ☆ 960 · Updated 3 months ago
- Official implementation of "Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling" ☆ 803 · Updated 3 months ago
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ☆ 1,435 · Updated 3 weeks ago
- Tools for merging pretrained large language models. ☆ 4,816 · Updated 2 weeks ago
- MLE-bench is a benchmark for measuring how well AI agents perform at machine learning engineering ☆ 517 · Updated 2 weeks ago
- Repository for Meta Chameleon, a mixed-modal early-fusion foundation model from FAIR. ☆ 1,840 · Updated 3 months ago
- A native PyTorch library for large model training ☆ 2,623 · Updated this week
- Schedule-Free Optimization in PyTorch ☆ 1,898 · Updated 2 weeks ago
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆ 647 · Updated last month
- 0️⃣1️⃣🤗 BitNet-Transformers: Huggingface Transformers Implementation of "BitNet: Scaling 1-bit Transformers for Large Language Models" i… ☆ 278 · Updated 8 months ago
- Training LLMs with QLoRA + FSDP ☆ 1,418 · Updated last week
- ☆ 935 · Updated 2 weeks ago
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆ 811 · Updated this week
- DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models ☆ 1,008 · Updated 10 months ago
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆ 1,391 · Updated 8 months ago
- System 2 Reasoning Link Collection ☆ 693 · Updated 3 weeks ago
- OLMoE: Open Mixture-of-Experts Language Models ☆ 460 · Updated this week
- Minimalistic large language model 3D-parallelism training ☆ 1,260 · Updated this week
- ☆ 1,954 · Updated 3 weeks ago
- DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models ☆ 835 · Updated 7 months ago
- Official PyTorch repository for Extreme Compression of Large Language Models via Additive Quantization https://arxiv.org/pdf/2401.06118.p… ☆ 1,170 · Updated last week
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆ 744 · Updated this week
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆ 1,149 · Updated last month
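Two of the listed projects implement BitNet, whose central trick is quantizing weight matrices to 1 bit. A minimal sketch of the binarization step described in the paper (center the weights, take the sign, rescale by the mean absolute value); this is illustrative only, assuming per-tensor absmean scaling, and the listed repositories implement the full training scheme around it:

```python
import numpy as np


def binarize(w):
    """Quantize a weight matrix to +/-beta, beta = mean(|W|).

    Centering before sign() keeps the binarized matrix roughly
    zero-mean; beta preserves the original magnitude scale.
    """
    beta = np.abs(w).mean()
    return np.sign(w - w.mean()) * beta


w = np.array([[0.4, -0.2],
              [0.1, -0.3]])
wb = binarize(w)  # every entry is +0.25 or -0.25
```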