alexrs / herd
Mixture of Experts (MoE) techniques for enhancing LLM performance through expert-driven prompt mapping and adapter combinations.
☆12 · Updated 2 years ago
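The description is terse, so here is a minimal sketch of the adapter-routing idea it names: embed the incoming prompt, pick the "expert" LoRA adapter whose description is most similar, and generate with that adapter active. Everything here (the base model, the adapter catalogue, the `route` helper) is an illustrative assumption, not herd's actual API.

```python
# Hypothetical sketch of expert-driven prompt mapping over LoRA adapters.
# None of these names come from herd; they only illustrate the idea.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
from sentence_transformers import SentenceTransformer

BASE = "meta-llama/Llama-2-7b-hf"        # assumed base model
EXPERT_ADAPTERS = {                      # assumed adapter name -> description
    "code": "questions about programming and software",
    "math": "math word problems and proofs",
    "chat": "general open-ended conversation",
}

embedder = SentenceTransformer("all-MiniLM-L6-v2")
desc_embs = embedder.encode(list(EXPERT_ADAPTERS.values()), convert_to_tensor=True)

def route(prompt: str) -> str:
    """Return the name of the adapter whose description best matches the prompt."""
    q = embedder.encode(prompt, convert_to_tensor=True)
    sims = torch.nn.functional.cosine_similarity(q, desc_embs, dim=-1)
    return list(EXPERT_ADAPTERS)[int(sims.argmax())]

tok = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.float16)
# Load the routed adapter from a local directory of per-expert LoRA weights.
model = PeftModel.from_pretrained(model, f"experts/{route('Write quicksort in C')}")
```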
Alternatives and similar repositories for herd
Users interested in herd are comparing it to the libraries listed below.
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆40 · Updated last year
- Script for processing OpenAI's PRM800K process supervision dataset into an Alpaca-style instruction-response format ☆27 · Updated 2 years ago
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite. ☆34 · Updated last year
- The repository contains code for Adaptive Data Optimization ☆32 · Updated last year
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆61 · Updated last year
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆45 · Updated 4 months ago
- ☆56 · Updated last year
- Demonstration that finetuning a RoPE model on sequences longer than its pretraining length extends the model's context limit (see the RoPE sketch after this list) ☆63 · Updated 2 years ago
- A repository for research on medium-sized language models. ☆77 · Updated last year
- GoldFinch and other hybrid transformer components ☆45 · Updated last year
- Code for the examples presented in the talk "Training a Llama in your backyard: fine-tuning very large models on consumer hardware" given… ☆15 · Updated 2 years ago
- Aioli: A unified optimization framework for language model data mixing ☆32 · Updated last year
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆48 · Updated 2 years ago
- Repository containing the SPIN experiments on the DIBT 10k ranked prompts ☆23 · Updated last year
- Understanding the correlation between different LLM benchmarks ☆29 · Updated 2 years ago
- ☆27 · Updated 5 months ago
- ☆41 · Updated last year
- ☆68 · Updated last year
- Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google, in PyTorch ☆58 · Updated last week
- [TMLR 2026] When Attention Collapses: How Degenerate Layers in LLMs Enable Smaller, Stronger Models ☆122 · Updated last year
- Minimum Description Length probing for neural network representations ☆20 · Updated last year
- Code for PHATGOOSE introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆91 · Updated last year
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆63 · Updated last year
- Code implementation, evaluations, documentation, links, and resources for the Min P paper (see the min-p sampling sketch after this list) ☆46 · Updated 5 months ago
- Official repository for "BLEUBERI: BLEU is a surprisingly effective reward for instruction following" ☆31 · Updated 8 months ago
- ☆20 · Updated 10 months ago
- ☆91 · Updated last year
- Codebase for Instruction Following without Instruction Tuning ☆36 · Updated last year
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆38 · Updated 8 months ago
- Code repository for the c-BTM paper ☆108 · Updated 2 years ago
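For the RoPE long-context entry above, a compact reference implementation of rotary position embeddings makes the claim concrete: positions are encoded by rotating query/key feature pairs, which is why finetuning on longer sequences can extend the usable context. This is a sketch of the standard interleaved RoPE formulation (base 10000), not the linked repository's code.

```python
# Minimal rotary position embedding (RoPE), interleaved variant.
import torch

def rope(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """Apply RoPE to x of shape (seq_len, dim); dim must be even."""
    seq_len, dim = x.shape
    # One frequency per 2-D feature pair, geometrically spaced.
    inv_freq = base ** (-torch.arange(0, dim, 2, dtype=torch.float32) / dim)
    angles = torch.arange(seq_len, dtype=torch.float32)[:, None] * inv_freq  # (seq, dim/2)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[:, 0::2], x[:, 1::2]            # split into even/odd feature pairs
    # Rotate each 2-D pair by its position-dependent angle, then re-interleave.
    return torch.stack([x1 * cos - x2 * sin,
                        x1 * sin + x2 * cos], dim=-1).reshape(seq_len, dim)
```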
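And for the Min P entry, the sampler itself is simple enough to show in full: keep only tokens whose probability is at least `min_p` times the top token's probability, renormalize, and sample. A self-contained sketch following the description in the paper, not the linked repository's code:

```python
# Min-p sampling: the truncation threshold scales with model confidence.
import torch

def min_p_sample(logits: torch.Tensor, min_p: float = 0.1) -> int:
    """Sample one token id from `logits` (shape [vocab]) using min-p truncation."""
    probs = torch.softmax(logits, dim=-1)
    threshold = min_p * probs.max()            # cutoff relative to the top probability
    probs = torch.where(probs >= threshold, probs, torch.zeros_like(probs))
    probs = probs / probs.sum()                # renormalize the surviving mass
    return int(torch.multinomial(probs, num_samples=1))
```

When the model is confident (a peaked distribution), the threshold is high and few tokens survive; when it is uncertain, the threshold drops and more candidates remain, which is the behavior min-p is designed to achieve.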