IBM / ModuleFormer
ModuleFormer is a MoE-based architecture with two types of experts: stick-breaking attention heads and feedforward experts. We released a collection of ModuleFormer-based Language Models (MoLM) ranging from 4 billion to 8 billion parameters.
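For readers unfamiliar with the MoE building block, the sketch below shows the standard top-k expert-routing idea behind such feedforward experts. It is a minimal illustration, not ModuleFormer's actual implementation; the class and parameter names (`TopKMoE`, `d_hidden`, etc.) are assumptions for this example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Minimal top-k routed mixture of feedforward experts (illustrative only,
    not ModuleFormer's actual API)."""

    def __init__(self, d_model: int, d_hidden: int, n_experts: int, k: int = 2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(d_model, n_experts, bias=False)  # router
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        tokens = x.reshape(-1, x.size(-1))              # (n_tokens, d_model)
        logits = self.gate(tokens)                      # (n_tokens, n_experts)
        weights, idx = logits.topk(self.k, dim=-1)      # each token picks k experts
        weights = F.softmax(weights, dim=-1)            # renormalize over the chosen k
        out = torch.zeros_like(tokens)
        for e, expert in enumerate(self.experts):
            token_ids, slot = (idx == e).nonzero(as_tuple=True)  # tokens routed to expert e
            if token_ids.numel():
                out[token_ids] += weights[token_ids, slot].unsqueeze(-1) * expert(tokens[token_ids])
        return out.reshape_as(x)
```

Calling `TopKMoE(64, 256, n_experts=8)` on a `(batch, seq, 64)` tensor routes each token through its two highest-scoring experts and sums their weighted outputs; only the selected experts run per token, which is what makes MoE layers cheap relative to their parameter count.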
☆215 · Updated 10 months ago
Alternatives and similar repositories for ModuleFormer:
Users interested in ModuleFormer are comparing it to the repositories listed below.
- Code repository for the c-BTM paper ☆105 · Updated last year
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆204 · Updated 9 months ago
- Multipack distributed sampler for fast padding-free training of LLMs ☆184 · Updated 6 months ago
- Experiments on speculative sampling with Llama models ☆124 · Updated last year
- TART: A plug-and-play Transformer module for task-agnostic reasoning ☆195 · Updated last year
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆114 · Updated 8 months ago
- Simple next-token prediction for RLHF ☆222 · Updated last year
- Batched LoRAs ☆338 · Updated last year
- The repo for the paper "Shepherd: A Critic for Language Model Generation" ☆218 · Updated last year
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models" ☆266 · Updated last year
- An implementation of Self-Extend, which expands the context window via grouped attention ☆118 · Updated last year
- Manage scalable open LLM inference endpoints in Slurm clusters ☆252 · Updated 7 months ago
- Benchmarking LLMs with Challenging Tasks from Real Users ☆215 · Updated 3 months ago
- Small and Efficient Mathematical Reasoning LLMs ☆71 · Updated last year
- A collection of benchmark logs for different LLMs ☆119 · Updated 6 months ago
- Spherical (SLERP) merging of PyTorch/HF-format language models with minimal feature loss (see the sketch after this list) ☆115 · Updated last year
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day ☆255 · Updated last year
- Pre-training code for the Amber 7B LLM ☆162 · Updated 9 months ago
- DSIR: a large-scale data selection framework for language model training ☆241 · Updated 10 months ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆140 · Updated 5 months ago
- Self-Alignment with Principle-Following Reward Models ☆154 · Updated 11 months ago
- Mixing Language Models with Self-Verification and Meta-Verification ☆101 · Updated 2 months ago
- Evaluating LLMs with CommonGen-Lite ☆88 · Updated 11 months ago
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆451 · Updated 11 months ago
- Evaluating LLMs with fewer examples ☆145 · Updated 10 months ago
- Functional Benchmarks and the Reasoning Gap ☆82 · Updated 4 months ago
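On the spherical-merge entry above: SLERP interpolates along the great circle between two flattened weight vectors instead of averaging them linearly, which tends to preserve the magnitude and direction of features better than a plain mean. A minimal sketch, assuming two checkpoints with identical architectures; `slerp` and `merge_state_dicts` are hypothetical helper names, not the linked repo's API:

```python
import torch

def slerp(a: torch.Tensor, b: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors (illustrative)."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_n = a_flat / (a_flat.norm() + eps)
    b_n = b_flat / (b_flat.norm() + eps)
    omega = torch.arccos(torch.clamp(a_n @ b_n, -1.0, 1.0))  # angle between weight vectors
    if omega.abs() < eps:                                    # nearly parallel: fall back to lerp
        return (1 - t) * a + t * b
    so = torch.sin(omega)
    mixed = (torch.sin((1 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return mixed.reshape(a.shape).to(a.dtype)

def merge_state_dicts(sd_a: dict, sd_b: dict, t: float = 0.5) -> dict:
    # Interpolate every matching parameter tensor; assumes identical keys and shapes.
    return {k: slerp(sd_a[k], sd_b[k], t) for k in sd_a}
```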