yuchenlin / LLM-Blender
[ACL 2023] We introduce LLM-Blender, an ensembling framework that attains consistently superior performance by leveraging the diverse strengths of multiple open-source LLMs. LLM-Blender cuts out weaknesses through ranking and integrates strengths through fusing generations to enhance the capability of LLMs.
☆976 · Oct 22, 2024 · Updated last year
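The rank-then-fuse scheme described above (pairwise ranking to filter out weak candidates, then fusion of the top-ranked outputs) can be sketched generically. This is a minimal illustration only: `compare` and `fuse` here are toy stand-ins, not LLM-Blender's trained PairRanker and GenFuser models or its actual API.

```python
# Hedged sketch of rank-then-fuse ensembling in the spirit of LLM-Blender.
# "compare" and "fuse" are hypothetical placeholders for learned components.

from itertools import combinations

def pairwise_rank(candidates, compare):
    """Rank candidates by counting pairwise wins under a comparator.
    LLM-Blender's PairRanker plays this role with a learned model."""
    wins = {i: 0 for i in range(len(candidates))}
    for i, j in combinations(range(len(candidates)), 2):
        winner = i if compare(candidates[i], candidates[j]) else j
        wins[winner] += 1
    order = sorted(wins, key=wins.get, reverse=True)
    return [candidates[i] for i in order]

def fuse(top_candidates):
    """Placeholder fusion: join the top candidates for a generator to
    rewrite; LLM-Blender trains a seq2seq GenFuser for this step."""
    return " / ".join(top_candidates)

# Toy usage: the comparator prefers longer answers as a crude quality proxy.
answers = ["Paris", "The capital of France is Paris.", "France"]
ranked = pairwise_rank(answers, compare=lambda a, b: len(a) > len(b))
fused = fuse(ranked[:2])
```

The key design point the sketch preserves is the two-stage split: ranking prunes weak generations before fusion, so the fuser only conditions on the strongest candidates.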
Alternatives and similar repositories for LLM-Blender
Users interested in LLM-Blender are comparing it to the libraries listed below.
- Tools for merging pretrained large language models. ☆6,783 · Jan 26, 2026 · Updated 2 weeks ago
- The official implementation of Self-Play Fine-Tuning (SPIN) ☆1,234 · May 8, 2024 · Updated last year
- LLMs built upon Evol Instruct: WizardLM, WizardCoder, WizardMath ☆9,477 · Jun 7, 2025 · Updated 8 months ago
- [NeurIPS 2023] RRHF & Wombat ☆808 · Sep 22, 2023 · Updated 2 years ago
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters ☆5,936 · Mar 14, 2024 · Updated last year
- Robust recipes to align language models with human and AI preferences ☆5,495 · Sep 8, 2025 · Updated 5 months ago
- AllenAI's post-training codebase ☆3,573 · Updated this week
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ☆4,742 · Jan 8, 2024 · Updated 2 years ago
- Benchmarking large language models' complex reasoning ability with chain-of-thought prompting ☆2,768 · Aug 4, 2024 · Updated last year
- A modular RL library to fine-tune language models to human preferences ☆2,377 · Mar 1, 2024 · Updated last year
- Train transformer language models with reinforcement learning. ☆17,360 · Updated this week
- Salesforce open-source LLMs with 8k sequence length. ☆724 · Jan 31, 2025 · Updated last year
- A framework for few-shot evaluation of language models. ☆11,393 · Updated this week
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks ☆7,188 · Jul 11, 2024 · Updated last year
- Implementation of the training framework proposed in Self-Rewarding Language Model, from MetaAI ☆1,407 · Apr 11, 2024 · Updated last year
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,669 · Apr 17, 2024 · Updated last year
- Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks. ☆2,885 · Updated this week
- General technology for enabling AI capabilities w/ LLMs and MLLMs ☆4,284 · Dec 22, 2025 · Updated last month
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,705 · Jun 25, 2024 · Updated last year
- Customizable implementation of the self-instruct paper. ☆1,050 · Mar 7, 2024 · Updated last year
- Aligning pretrained language models with instruction data generated by themselves. ☆4,573 · Mar 27, 2023 · Updated 2 years ago
- [ACL 2023] One Embedder, Any Task: Instruction-Finetuned Text Embeddings ☆2,022 · Jan 15, 2025 · Updated last year
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆20,619 · Updated this week
- The RedPajama-Data repository contains code for preparing large datasets for training large language models. ☆4,924 · Dec 7, 2024 · Updated last year
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆905 · Sep 30, 2025 · Updated 4 months ago
- A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data. ☆842 · Jul 1, 2024 · Updated last year
- Large-scale, Informative, and Diverse Multi-round Chat Data (and Models) ☆2,783 · Mar 13, 2024 · Updated last year
- [NeurIPS 2023] MeZO: Fine-Tuning Language Models with Just Forward Passes. https://arxiv.org/abs/2305.17333 ☆1,143 · Jan 11, 2024 · Updated 2 years ago
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,946 · Aug 9, 2025 · Updated 6 months ago
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,835 · Jun 10, 2024 · Updated last year
- A library for advanced large language model reasoning ☆2,330 · Jun 10, 2025 · Updated 8 months ago
- [ACL 2024] An Easy-to-use Knowledge Editing Framework for LLMs. ☆2,711 · Feb 4, 2026 · Updated last week
- Code and data for "Lumos: Learning Agents with Unified Data, Modular Design, and Open-Source LLMs" ☆473 · Mar 19, 2024 · Updated last year
- ☆1,559 · Feb 5, 2026 · Updated last week
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ☆8,891 · May 3, 2024 · Updated last year
- Gorilla: Training and Evaluating LLMs for Function Calls (Tool Calls) ☆12,717 · Updated this week
- LongLLaMA is a large language model capable of handling long contexts. It is based on OpenLLaMA and fine-tuned with the Focused Transform… ☆1,463 · Nov 7, 2023 · Updated 2 years ago
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆3,084 · Jan 26, 2026 · Updated 2 weeks ago
- Large Language Model Text Generation Inference ☆10,757 · Jan 8, 2026 · Updated last month