yuchenlin / LLM-Blender
[ACL 2023] We introduce LLM-Blender, an ensembling framework that attains consistently superior performance by leveraging the diverse strengths of multiple open-source LLMs. LLM-Blender mitigates weaknesses through ranking and integrates strengths through fusing generations, enhancing the overall capability of LLMs.
☆973 · Updated last year
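The rank-then-fuse recipe described above can be sketched as follows. This is a minimal illustration of the idea only, not the repository's actual API: `pairwise_score` and `fuse` are hypothetical stand-ins for a pairwise reward model and a generative fuser.

```python
# Sketch of LLM-Blender-style ensembling: rank candidates from several
# models via pairwise comparisons, then fuse the top candidates.
# NOTE: pairwise_score and fuse are illustrative placeholders, not the
# library's real interfaces.
from itertools import combinations

def rank_candidates(candidates, pairwise_score):
    """Rank candidates by wins across all pairwise comparisons."""
    wins = {i: 0 for i in range(len(candidates))}
    for i, j in combinations(range(len(candidates)), 2):
        # pairwise_score > 0 means candidate i is preferred over j
        if pairwise_score(candidates[i], candidates[j]) >= 0:
            wins[i] += 1
        else:
            wins[j] += 1
    order = sorted(wins, key=wins.get, reverse=True)
    return [candidates[k] for k in order]

def blend(prompt, models, pairwise_score, fuse, top_k=3):
    # 1) Ranking: one candidate per model, keep the top-k by pairwise wins.
    candidates = [m(prompt) for m in models]
    top = rank_candidates(candidates, pairwise_score)[:top_k]
    # 2) Fusing: a generative fuser merges the top candidates into one output.
    return fuse(prompt, top)
```

In the actual framework, the pairwise scorer and the fuser are both learned models; the sketch only shows how their outputs compose.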
Alternatives and similar repositories for LLM-Blender
Users interested in LLM-Blender are comparing it to the libraries listed below.
- Code for fine-tuning Platypus-family LLMs using LoRA ☆631 · Updated last year
- [ICLR 2024] Lemur: Open Foundation Models for Language Agents ☆556 · Updated 2 years ago
- A collection of open-source datasets to train instruction-following LLMs (ChatGPT, LLaMA, Alpaca) ☆1,136 · Updated last year
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. ☆551 · Updated last year
- Official implementation of our NeurIPS 2023 paper "Augmenting Language Models with Long-Term Memory". ☆821 · Updated last year
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,652 · Updated last year
- Official repository for LongChat and LongEval ☆533 · Updated last year
- Code for "Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models". ☆1,140 · Updated 2 years ago
- FacTool: Factuality Detection in Generative AI ☆900 · Updated last year
- This repository contains code and tooling for the Abacus.AI LLM Context Expansion project. Also included are evaluation scripts and bench… ☆598 · Updated 2 years ago
- A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data. ☆839 · Updated last year
- LOMO: LOw-Memory Optimization ☆991 · Updated last year
- Dromedary: towards helpful, ethical, and reliable LLMs. ☆1,144 · Updated 3 months ago
- LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions ☆824 · Updated 2 years ago
- Extend existing LLMs far beyond their original training length with constant memory usage, without retraining ☆733 · Updated last year
- ☆769 · Updated last year
- LLMs can generate feedback on their own work, use it to improve their outputs, and repeat this process iteratively. ☆763 · Updated last year
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,925 · Updated 4 months ago
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning ☆663 · Updated last year
- ☆379 · Updated 2 years ago
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition ☆663 · Updated last year
- Generative Representational Instruction Tuning ☆680 · Updated 6 months ago
- PaL: Program-Aided Language Models (ICML 2023) ☆518 · Updated 2 years ago
- A central, open resource for data and tools related to chain-of-thought reasoning in large language models. Developed @ Samwald research … ☆1,007 · Updated last year
- Forward-Looking Active REtrieval-augmented generation (FLARE) ☆662 · Updated 2 years ago
- Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them ☆538 · Updated last year
- Xwin-LM: Powerful, Stable, and Reproducible LLM Alignment ☆1,043 · Updated last year
- ToRA is a series of Tool-integrated Reasoning LLM Agents designed to solve challenging mathematical reasoning problems by interacting wit… ☆1,107 · Updated last year
- Customizable implementation of the self-instruct paper. ☆1,050 · Updated last year
- ☆277 · Updated 2 years ago